Sample records for accelerator physics code

  1. Computational Accelerator Physics. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisognano, J.J.; Mondelli, A.A.

    1997-04-01

    The sixty-two papers in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia, September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Thirty of the papers are abstracted for the Energy Science and Technology database. (AIP)

  2. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles Density Functional Theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite-temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that uses only accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
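
The block matrix inversion the abstract mentions can be illustrated with the standard Schur-complement identity: invert one diagonal block, form the Schur complement of the rest, and assemble the full inverse block by block, so only block-sized workspaces are ever needed. A minimal pure-Python sketch on 2x2 blocks (all names and matrices here are illustrative, not the actual LSMS implementation):

```python
def matmul(A, B):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matsub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matneg(A):
    return [[-a for a in row] for row in A]

def inv2(M):
    """Direct inverse of a 2x2 block."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def block_inv(A, B, C, D):
    """Inverse of [[A, B], [C, D]] via the Schur complement S = D - C A^-1 B."""
    Ai = inv2(A)
    AiB = matmul(Ai, B)
    CAi = matmul(C, Ai)
    Si = inv2(matsub(D, matmul(C, AiB)))
    TL = matadd(Ai, matmul(AiB, matmul(Si, CAi)))  # A^-1 + A^-1 B S^-1 C A^-1
    TR = matneg(matmul(AiB, Si))                   # -A^-1 B S^-1
    BL = matneg(matmul(Si, CAi))                   # -S^-1 C A^-1
    return [TL[0] + TR[0], TL[1] + TR[1], BL[0] + Si[0], BL[1] + Si[1]]

A = [[4.0, 1.0], [2.0, 3.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
C = [[0.0, 2.0], [1.0, 0.0]]
D = [[5.0, 1.0], [1.0, 4.0]]
M = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
Minv = block_inv(A, B, C, D)
P = matmul(M, Minv)  # should be (numerically) the 4x4 identity
assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(4) for j in range(4))
```

On an accelerator, the same decomposition lets each block inversion and block multiply stay resident in device memory; larger matrices recurse on their blocks.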

  3. LEGO: A modular accelerator design code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Donald, M.; Irwin, J.

    1997-08-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors, TRACY and DESPOT. All single-particle dynamics is implemented based on the Hamiltonian in the local frame of each component. Components can be moved arbitrarily in three-dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract Taylor maps up to arbitrary order. Analysis of optics is done in the same way for both the linear and nonlinear cases. Currently, the code is used to design and simulate the lattices of PEP-II; it will also be used for commissioning.

  4. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    DOE PAGES

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...

    2016-07-12

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles Density Functional Theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite-temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that uses only accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.

  5. Code comparison for accelerator design and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-01-01

    We present a comparison between results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings: the programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACE-TRACK. In our analysis we considered five lattices of various sizes, with large and small bending angles, including the AGS Booster (10° bend), RHIC (2.24°), SXLS, XLS (XUV ring with 45° bend) and X-RAY rings. Differences in the integration methods used and in the treatment of the fringe fields in these codes can lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs.

  6. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of the algorithms of the original ORBIT code, which was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. The PyORBIT code has a two-level structure: the upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in C++. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  7. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.

  8. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps, and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter.

  9. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest for exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, based on the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  10. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced results identical or statistically similar to those of the original code. Final benchmark timing of the accelerated code yielded speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
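
One optimization named above, replacing a linear search with a binary version, can be sketched as follows. The sorted energy grid and function names are hypothetical, not taken from the ITS source; the point is that both lookups return the same bin while the binary version costs O(log n) per call:

```python
import bisect

energy_grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0]  # sorted grid (MeV)

def find_bin_linear(e):
    """Original-style linear scan: O(n) per lookup."""
    for i in range(len(energy_grid) - 1):
        if energy_grid[i] <= e < energy_grid[i + 1]:
            return i
    raise ValueError("energy outside grid")

def find_bin_binary(e):
    """Binary search via bisect: O(log n) per lookup, same answer."""
    i = bisect.bisect_right(energy_grid, e) - 1
    if 0 <= i < len(energy_grid) - 1:
        return i
    raise ValueError("energy outside grid")

for e in (0.05, 0.7, 9.9):
    assert find_bin_linear(e) == find_bin_binary(e)
```

In a Monte Carlo transport loop this lookup runs once per collision, so the asymptotic saving compounds over the whole run.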

  11. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870
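
The correlation at the heart of such grid-based docking codes can be sketched in one dimension: each candidate shift of the ligand grid against the receptor grid is scored by a sum of products, and the best pose maximizes the score. Real codes correlate 3-D grids (via FFTs in software, or systolic correlation arrays on FPGAs); all data and names here are invented for illustration:

```python
def correlation_scores(receptor, ligand):
    """Score every shift of the ligand along the receptor grid."""
    n, m = len(receptor), len(ligand)
    return [sum(receptor[shift + j] * ligand[j] for j in range(m))
            for shift in range(n - m + 1)]

# A "pocket" of high shape complementarity at positions 3..5:
receptor = [0, 0, 0, 3, 4, 3, 0, 0]
ligand = [3, 4, 3]
scores = correlation_scores(receptor, ligand)
best_shift = max(range(len(scores)), key=scores.__getitem__)
assert best_shift == 3  # the ligand docks into the pocket
```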

  12. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle-mesh N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for large-scale structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
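
The frame-shifting idea can be illustrated with a toy 1-D oscillator: integrate only the residual motion about a known approximate trajectory (standing in for the LPT trajectory), so coarse time steps remain accurate where the frame already captures the bulk motion. A hedged sketch of the idea, not COLAcode itself; the oscillator and step sizes are invented:

```python
import math

def force(x):        # toy force law, F = -x
    return -x

def frame(t):        # known approximate trajectory (here exact): cos(t)
    return math.cos(t)

def frame_acc(t):    # its acceleration
    return -math.cos(t)

def leapfrog(dt, steps, comoving):
    """Kick-drift-kick leapfrog; if comoving, evolve residual r = x - frame."""
    t = 0.0
    if comoving:
        r, v = 0.0, 0.0  # start exactly on the frame trajectory
        for _ in range(steps):
            v += 0.5 * dt * (force(frame(t) + r) - frame_acc(t))
            r += dt * v
            t += dt
            v += 0.5 * dt * (force(frame(t) + r) - frame_acc(t))
        return frame(t) + r
    x, v = 1.0, 0.0
    for _ in range(steps):
        v += 0.5 * dt * force(x)
        x += dt * v
        t += dt
        v += 0.5 * dt * force(x)
    return x

dt, steps = 0.5, 20  # deliberately coarse steps, t_end = 10
exact = math.cos(10.0)
err_direct = abs(leapfrog(dt, steps, comoving=False) - exact)
err_comoving = abs(leapfrog(dt, steps, comoving=True) - exact)
assert err_comoving < err_direct  # the frame absorbs the known motion
```

In COLA proper the frame is the LPT solution, which is only approximate, so the residual is small but nonzero; the toy makes the frame exact purely to isolate the mechanism.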

  13. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation of machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of that time.

  14. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo, F. (Dipartimento SBAI, Università di Roma “La Sapienza”); Atzeni, S.

    Architect, a time-explicit hybrid code designed to perform quick simulations of electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to weakly nonlinear regimes. In highly nonlinear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  15. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles Density Functional Theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite-temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  16. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, three-dimensional (3D) transport-based reference solutions are essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  17. Accelerators, Beams And Physical Review Special Topics - Accelerators And Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siemann, R.H. (SLAC)

    Accelerator science and technology have evolved as accelerators became larger and important to a broad range of science. Physical Review Special Topics - Accelerators and Beams was established to serve the accelerator community as a timely, widely circulated, international journal covering the full breadth of accelerators and beams. The history of the journal and the innovations associated with it are reviewed.

  18. Physics and engineering design of the accelerator and electron dump for SPIDER

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.

    2011-06-01

    The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and in a later stage D- ions) from an ITER size ion source. The main requirements of this experiment are a H-/D- extracted current density larger than 355/285 A m-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator

  19. Accelerator science in medical physics.

    PubMed

    Peach, K; Wilson, P; Jones, B

    2011-12-01

    The use of cyclotrons and synchrotrons to accelerate charged particles in hospital settings for the purpose of cancer therapy is increasing. Consequently, there is a growing demand from medical physicists, radiographers, physicians and oncologists for articles that explain the basic physical concepts of these technologies. There are unique advantages and disadvantages to all methods of acceleration. Several promising alternative methods of accelerating particles also have to be considered since they will become increasingly available with time; however, there are still many technical problems with these that require solving. This article serves as an introduction to this complex area of physics, and will be of benefit to those engaged in cancer therapy, or who intend to acquire such technologies in the future.

  20. Chaotic dynamics in accelerator physics

    NASA Astrophysics Data System (ADS)

    Cary, J. R.

    1992-11-01

    Substantial progress was made in several areas of accelerator dynamics. We have completed a design of an FEL wiggler with adiabatic trapping and detrapping sections, to develop an understanding of longitudinal adiabatic dynamics and to create efficiency enhancements for recirculating free-electron lasers. We developed a computer code for analyzing the critical KAM tori that bound the dynamic aperture in circular machines. Studies of modes that arise from the interaction of coasting beams with a narrow-spectrum impedance have begun. During this research, educational and research ties with the accelerator community at large have been strengthened.

  1. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

  2. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
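
The design described above, with parameters stored in hash tables and an accelerator class that inherits generic advancement, might be sketched in Python, where the hash table is a dict. All class and parameter names are invented for illustration; this is not the MAPA API:

```python
class DynamicalSystem:
    """Base class: named parameters live in a hash table (dict), so a
    GUI can enumerate and edit them without knowing the element type."""
    def __init__(self, **params):
        self.params = dict(params)

class Drift(DynamicalSystem):
    """Field-free drift: x advances by length * x', with state = (x, xp)."""
    def advance(self, state):
        x, xp = state
        return (x + self.params["length"] * xp, xp)

class Accelerator(DynamicalSystem):
    """A string of elements; advancing maps the state through each in turn."""
    def __init__(self, elements, **params):
        super().__init__(**params)
        self.elements = elements
    def advance(self, state):
        for elem in self.elements:
            state = elem.advance(state)
        return state

ring = Accelerator([Drift(length=2.0), Drift(length=3.0)], name="demo")
out = ring.advance((0.0, 0.1))
assert abs(out[0] - 0.5) < 1e-12 and out[1] == 0.1
# A GUI can display ring.elements[0].params generically, per the abstract.
```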

  3. Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi

    A code system for the Accelerator Driven System (ADS) has been under development for analyzing the dynamic behavior of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating accelerator-related effects such as changes of beam energy, beam diameter, void generation, and target level. This analysis method may introduce some errors into the dynamics calculations, since the neutron source data derived from the database carry errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the database-based method is appropriate.

  4. Compensation Techniques in Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayed, Hisham Kamal

    2011-05-01

    Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than an understanding of basic electromagnetic interactions to predict beam dynamics and to develop new techniques to produce, maintain, and deliver high-quality beams for different applications. In this work, some basic theory of particle beam dynamics in accelerators is presented. This theory, together with state-of-the-art techniques in beam dynamics, is used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure-eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.

  5. The Influence of Accelerator Science on Physics Research

    NASA Astrophysics Data System (ADS)

    Haussecker, Enzo F.; Chao, Alexander W.

    2011-06-01

    We evaluate accelerator science in the context of its contributions to the physics community. We address the problem of quantifying these contributions and present a scheme for their numerical evaluation. Using a statistical sample of important developments in modern physics, we show that accelerator science has influenced 28% of post-1938 physicists and 28% of post-1938 physics research. We also examine how the influence of accelerator science has evolved over time, and show that on average it has contributed to Nobel Prize-winning physics research once every 2.9 years.

  6. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    DOE PAGES

    Luo, Yun

    2015-08-29

    SimTrack is a compact C++ code for 6-D symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-D symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-D symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and selected examples of its application to RHIC and to eRHIC, a future electron-ion collider design.
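
Fourth-order symplectic integration of the kind mentioned here is commonly built from Yoshida's composition of drifts and kicks. A sketch on a plain harmonic oscillator, with the standard Yoshida weights; everything else is illustrative, not SimTrack's actual implementation:

```python
import math

# Standard Yoshida 4th-order coefficients (W = 2^(1/3)).
W = 2.0 ** (1.0 / 3.0)
C = [0.5 / (2 - W), (1 - W) / (2 * (2 - W)),
     (1 - W) / (2 * (2 - W)), 0.5 / (2 - W)]   # drift weights
D = [1.0 / (2 - W), -W / (2 - W), 1.0 / (2 - W), 0.0]  # kick weights

def yoshida4_step(x, p, dt, force):
    """One 4th-order symplectic step: alternating drifts and kicks."""
    for c, d in zip(C, D):
        x += c * dt * p          # drift
        p += d * dt * force(x)   # kick
    return x, p

x, p, dt = 1.0, 0.0, 0.1
for _ in range(1000):            # integrate to t = 100
    x, p = yoshida4_step(x, p, dt, lambda q: -q)
energy = 0.5 * (x * x + p * p)   # symplectic: stays near the initial 0.5
assert abs(energy - 0.5) < 1e-3
```

The bounded long-term energy error is what makes such integrators the standard choice for the many-turn tracking the abstract describes.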

  7. Further Studies of the NRL Collective Particle Accelerator VIA Numerical Modeling with the MAGIC Code.

    DTIC Science & Technology

    1984-08-01

    Further Studies of the NRL Collective Particle Accelerator via Numerical Modeling with the MAGIC Code. Robert J. Barker. August 1984. Final report for the period 1 April 1984 - 30 September 1984. Performing organization report number MRC/WDC-R.

  8. Transform coding for hardware-accelerated volume rendering.

    PubMed

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
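
The consolidation of dequantization with the inverse transform can be sketched with a small orthonormal DCT: scale each inverse-transform basis vector by its quantization step once, offline, so decoding a block becomes a single matrix-vector product. The block size and quantization steps below are illustrative choices, not the paper's actual scheme:

```python
import math

N = 4  # toy block size

def dct_basis():
    """Orthonormal DCT-II basis as an N x N matrix (rows = basis vectors)."""
    rows = []
    for k in range(N):
        s = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        rows.append([s * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                     for n in range(N)])
    return rows

B = dct_basis()
q = [1.0, 2.0, 4.0, 8.0]  # per-coefficient quantization steps

def encode(block):
    """Forward transform, then quantize to integer indices (offline)."""
    coeffs = [sum(B[k][n] * block[n] for n in range(N)) for k in range(N)]
    return [round(c / q[k]) for k, c in enumerate(coeffs)]

# Precompute the dequantizing inverse: column k of B^T scaled by q[k].
BTq = [[B[k][n] * q[k] for k in range(N)] for n in range(N)]

def decode(indices):
    """Dequantize + inverse transform folded into one matrix product."""
    return [sum(BTq[n][k] * indices[k] for k in range(N)) for n in range(N)]

block = [10.0, 12.0, 11.0, 9.0]
rec = decode(encode(block))
assert all(abs(a - b) <= max(q) for a, b in zip(block, rec))
```

Because `BTq` is fixed per quantization table, the decoder does no per-coefficient scaling at render time, which is the asymmetry (cheap decode, expensive offline encode) the abstract exploits.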

  9. High Energy Density Physics and Exotic Acceleration Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowan, T. (General Atomics, San Diego); Colby, E.

    2005-09-27

    The High Energy Density and Exotic Acceleration working group took as its goal to reach beyond the community of plasma accelerator research, with its applications to high energy physics, and to promote exchange with other disciplines challenged by related and demanding beam physics issues. The scope of the group was to cover particle acceleration and beam transport that, unlike the other groups at AAC, are not mediated by plasmas or by electromagnetic structures. At this Workshop we saw an impressive advance over years past in the area of vacuum acceleration, for example with the LEAP experiment at Stanford. And we saw an influx of exciting new beam physics topics involving particle propagation inside solid-density plasmas or at extremely high charge density, particularly in the areas of laser acceleration of ions and extreme beams for fusion energy research, including heavy-ion inertial fusion beam physics. One example of the importance and extreme nature of beam physics in HED research is the requirement, in the Fast Ignitor scheme of inertial fusion, to heat a compressed DT fusion pellet to keV temperatures by injection of laser-driven electron or ion beams of giga-amp current. Even in modest experiments presently being performed on the laser acceleration of ions from solids, mega-amp currents of MeV electrons must be transported through solid foils, requiring almost complete return-current neutralization and giving rise to a wide variety of beam-plasma instabilities. As keynote talks our group promoted Ion Acceleration (plenary talk by A. MacKinnon), which historically has grown out of inertial fusion research, and HIF Accelerator Research (invited talk by A. Friedman), which will require impressive advances in space-charge-limited ion beam physics and in understanding the generation and transport of neutralized ion beams. A unifying aspect of High Energy Density applications was the physics of particle beams inside of solids, which is

  10. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including but not limited to particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis on the hadronic and nuclear sector.

  11. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis; Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  12. Physical activities to enhance an understanding of acceleration

    NASA Astrophysics Data System (ADS)

    Lee, S. A.

    2006-03-01

    On the basis of their everyday experiences, students have developed an understanding of many of the concepts of mechanics by the time they take their first physics course. However, an accurate understanding of acceleration remains elusive. Many students have difficulty distinguishing between velocity and acceleration. In this report, a set of physical activities to highlight the differences between acceleration and velocity is described. These activities involve running and walking on sand (such as on an outdoor volleyball court).

  13. Advanced Computing Tools and Models for Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  14. Fifty years of accelerator based physics at Chalk River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKay, John W.

    1999-04-26

    The Chalk River Laboratories of Atomic Energy of Canada Ltd. was a major centre for accelerator-based physics for fifty years. As early as 1946, nuclear structure studies were started on Cockcroft-Walton accelerators. A series of accelerators followed, including the world's first Tandem, the MP Tandem, and the Tandem Accelerator Superconducting Cyclotron (TASCC) facility that opened in 1986. The nuclear physics program was shut down in 1996. This paper describes some of the highlights of the accelerators and the research of the laboratory.

  15. Physics and engineering studies on the MITICA accelerator: comparison among possible design solutions

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.

    2011-09-01

    Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions are discussed and compared, taking into account their advantages and drawbacks from both the physics and the engineering points of view.

  16. Analytical tools in accelerator physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litvinenko, V.N.

    2010-09-01

    This paper is a sub-set of my lectures presented in the Accelerator Physics course (USPAS, Santa Rosa, California, January 14-25, 2008). It is based on notes I wrote during the period from 1976 to 1979 in Novosibirsk. Only a few copies (in Russian) were distributed to my colleagues in the Novosibirsk Institute of Nuclear Physics. The goal of these notes is a complete description, starting from an arbitrary reference orbit, through explicit expressions for the 4-potential and the accelerator Hamiltonian, and finishing with the parameterization in action and angle variables. To a large degree they follow the logic developed in Theory of Cyclic Particle Accelerators by A.A. Kolomensky and A.N. Lebedev [Kolomensky], but go beyond the book in a number of directions. One unusual feature of these notes is the use of matrix functions and the Sylvester formula for calculating the matrices of arbitrary elements. Teaching the USPAS course motivated me to translate a significant part of my notes into English. I also included some introductory materials following Classical Theory of Fields by L.D. Landau and E.M. Lifshitz [Landau]. A large number of short notes covering various techniques are placed in the Appendices.
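
    The Sylvester formula mentioned in the abstract evaluates a matrix function f(A) as a sum of Frobenius covariants when the eigenvalues are distinct. A minimal sketch (our own illustration in Python/numpy, not code from the notes) applied to the rotation generator, the simplest accelerator-style transfer-matrix example:

```python
import numpy as np

def sylvester_matrix_function(A, f):
    """Evaluate f(A) via Sylvester's formula; assumes distinct eigenvalues."""
    lam = np.linalg.eigvals(A)
    n = len(lam)
    F = np.zeros_like(A, dtype=complex)
    for i in range(n):
        # Frobenius covariant: product of (A - lam_j I)/(lam_i - lam_j) over j != i
        P = np.eye(n, dtype=complex)
        for j in range(n):
            if j != i:
                P = P @ (A - lam[j] * np.eye(n)) / (lam[i] - lam[j])
        F += f(lam[i]) * P
    return F
```

    For the generator J = [[0, 1], [-1, 0]] with eigenvalues ±i, f = exp reproduces the one-radian rotation matrix, the familiar phase-space transfer matrix of a drift-free focusing channel.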

  17. Theoretical and Experimental Studies in Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenzweig, James

    This report describes research supported by the US Dept. of Energy Office of High Energy Physics (OHEP), performed by the UCLA Particle Beam Physics Laboratory (PBPL). The UCLA PBPL has, over the last two decades-plus, played a critical role in the development of advanced accelerators, fundamental beam physics, and new applications enabled by these thrusts, such as new types of accelerator-based light sources. As the PBPL mission is broad, it is natural that it has grown within the context of the accelerator science and technology stewardship of the OHEP. Indeed, steady OHEP support for the program has always been central to the success of the PBPL; it has provided stability, and above all has set the over-arching themes for our research directions, which have produced over 500 publications (>120 in high level journals). While other agency support has grown notably in recent years, permitting more vigorous pursuit of the program, it is transient by comparison. Beyond permitting program growth in a time of flat OHEP budgets, the influence of other agency missions is found in the push to adapt advanced accelerator methods to applications, in light of the success the field has had in proof-of-principle experiments supported first by the DoE OHEP. This three-pronged PBPL program (advanced accelerators, fundamental beam physics and technology, and revolutionary applications) has produced a generation of students that have had a profound effect on the US accelerator physics community. PBPL graduates, numbering 28 in total, form a significant population group in the accelerator community, playing key roles as university faculty, scientific leaders in national labs (two have been named Panofsky Fellows at SLAC), and vigorous proponents of industrial application of accelerators. Indeed, the development of advanced RF, optical and magnet technology at the PBPL has led directly to the spin-off company, RadiaBeam Technologies, now a leading industrial

  18. MAPA: an interactive accelerator design code with GUI

    NASA Astrophysics Data System (ADS)

    Bruhwiler, David L.; Cary, John R.; Shasharina, Svetlana G.

    1999-06-01

    The MAPA code is an interactive accelerator modeling and design tool with an X/Motif GUI. MAPA has been developed in C++ and makes full use of object-oriented features. We present an overview of its features and describe how users can independently extend the capabilities of the entire application, including the GUI. For example, a user can define a new model for a focusing or accelerating element. If the appropriate form is followed, and the new element is "registered" with a single line in the specified file, then the GUI will fully support this user-defined element type after it has been compiled and then linked to the existing application. In particular, the GUI will bring up windows for modifying any relevant parameters of the new element type. At present, one can use the GUI for phase space tracking, finding fixed points and generating line plots for the Twiss parameters, the dispersion and the accelerator geometry. The user can define new types of simulations which the GUI will automatically support by providing a menu option to execute the simulation and subsequently rendering line plots of the resulting data.

  19. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

    GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. The code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice nodes of ultralarge systems (∼5 billion atoms) and of diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
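
    At its core, the kinematic (single-scattering) approximation such codes implement is a direct sum of scattered amplitudes over atoms, I(q) = |Σ_j f_j exp(i q·r_j)|². A minimal CPU reference sketch (assuming numpy and a single constant atomic form factor; this illustrates the sum, not GAPD's GPU decompositions):

```python
import numpy as np

def kinematic_intensity(positions, q_vectors, f_atom=1.0):
    """Kinematic diffraction intensity I(q) = |sum_j f_j exp(i q . r_j)|^2."""
    phases = positions @ q_vectors.T               # (N_atoms, N_q) dot products q . r_j
    amplitudes = f_atom * np.exp(1j * phases).sum(axis=0)
    return np.abs(amplitudes) ** 2                 # intensity at each scattering vector
```

    At a reciprocal-lattice point of a perfect crystal the amplitudes add coherently and the intensity approaches N², the standard validation check for kinematic codes.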

  20. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
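
    The multigrid idea summarized above, smoothing high-frequency error on the fine grid and correcting the smooth remainder with a coarse-grid solve, can be sketched for the 1-D Poisson problem. This is a generic two-grid illustration with names of our choosing, not the Proteus implementation:

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    """One V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    nc = r.size // 2 + 1                        # coarse grid keeps every other point
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]   # full weighting
    hc = 2 * h
    A = (np.diag(2 * np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / (hc * hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])     # exact solve on the small coarse problem
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)    # linear prolongation
    return jacobi(u + e, f, h, sweeps=3)
```

    Each V-cycle contracts the residual by a roughly grid-independent factor, which is the source of the iteration-count reductions reported in the abstract; the overhead of the extra grid transfers is the cost it describes.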

  1. Multipactor Physics, Acceleration, and Breakdown in Dielectric-Loaded Accelerating Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, Richard P.; Gold, Steven H.

    2016-07-01

    The objective of this 3-year program is to study the physics issues associated with rf acceleration in dielectric-loaded accelerating (DLA) structures, with a focus on the key issue of multipactor loading, which has been found to cause very significant rf power loss in DLA structures whenever the rf pulsewidth exceeds the multipactor risetime (~10 ns). The experiments are carried out in the X-band magnicon laboratory at the Naval Research Laboratory (NRL) in collaboration with Argonne National Laboratory (ANL) and Euclid Techlabs LLC, who develop the test structures with support from the DoE SBIR program. There are two main elements in the research program: (1) high-power tests of DLA structures using the magnicon output (20 MW @ 11.4 GHz), and (2) tests of electron acceleration in DLA structures using relativistic electrons from a compact X-band accelerator. The work during this period has focused on a study of the use of an axial magnetic field to suppress multipactor in DLA structures, with several new high power tests carried out at NRL, and on preparation of the accelerator for the electron acceleration experiments.

  2. Particle acceleration and transport at a 2D CME-driven shock using the HAFv3 and PATH Code

    NASA Astrophysics Data System (ADS)

    Li, G.; Ao, X.; Fry, C. D.; Verkhoglyadova, O. P.; Zank, G. P.

    2012-12-01

    We study particle acceleration at a 2D CME-driven shock and the subsequent transport in the inner heliosphere (up to 2 AU) by coupling the kinematic Hakamada-Akasofu-Fry version 3 (HAFv3) solar wind model (Hakamada and Akasofu, 1982, Fry et al. 2003) with the Particle Acceleration and Transport in the Heliosphere (PATH) model (Zank et al., 2000, Li et al., 2003, 2005, Verkhoglyadova et al. 2009). HAFv3 provides the evolution of a two-dimensional shock geometry and other plasma parameters, which are fed into the PATH model to investigate the effect of a varying shock geometry on particle acceleration and transport. The transport module of the PATH model is parallelized and utilizes state-of-the-art GPU computation to achieve a rapid physics-based numerical description of the interplanetary energetic particles. Together with the fast execution of the HAFv3 model, the coupled code makes it possible to nowcast/forecast the interplanetary radiation environment.

  3. An introduction to the physics of high energy accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.A.; Syphers, M.J.

    1993-01-01

    This book is an outgrowth of a course given by the authors at various universities and particle accelerator schools. It starts from the basic physics principles governing particle motion inside an accelerator, and leads to a full description of the complicated phenomena and analytical tools encountered in the design and operation of a working accelerator. The book covers acceleration and longitudinal beam dynamics, transverse motion and nonlinear perturbations, intensity dependent effects, emittance preservation methods and synchrotron radiation. These subjects encompass the core concerns of a high energy synchrotron. The authors apparently do not assume the reader has much previous knowledge about accelerator physics. Hence, they take great care to introduce the physical phenomena encountered and the concepts used to describe them. The mathematical formulae and derivations are deliberately kept at a level suitable for beginners. After mastering this course, any interested reader will not find it difficult to follow subjects of more current interest. Useful homework problems are provided at the end of each chapter. Many of the problems are based on actual activities associated with the design and operation of existing accelerators.

  4. Processing Motion: Using Code to Teach Newtonian Physics

    NASA Astrophysics Data System (ADS)

    Massey, M. Ryan

    Prior to instruction, students often possess a common-sense view of motion, which is inconsistent with Newtonian physics. Effective physics lessons therefore involve conceptual change. To provide a theoretical explanation for concepts and how they change, the triangulation model brings together key attributes of prototypes, exemplars, theories, Bayesian learning, ontological categories, and the causal model theory. The triangulation model provides a theoretical rationale for why coding is a viable method for physics instruction. As an experiment, thirty-two adolescent students participated in summer coding academies to learn how to design Newtonian simulations. Conceptual and attitudinal data were collected using the Force Concept Inventory and the Colorado Learning Attitudes about Science Survey. Results suggest that coding is an effective means for teaching Newtonian physics.
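
    The flavor of Newtonian simulation such lessons build can be sketched in a few lines; this is our own illustration in Python rather than Processing, and the function name and parameters are invented for the example. It makes the distinction between velocity and acceleration explicit as two separate update steps:

```python
def simulate_fall(mass=1.0, drag=0.0, g=9.81, dt=0.01, steps=100):
    """Semi-implicit Euler integration of Newton's second law for a falling ball."""
    y, v = 0.0, 0.0
    for _ in range(steps):
        a = -g - (drag / mass) * v   # F = ma: gravity plus optional linear drag
        v += a * dt                  # acceleration changes velocity
        y += v * dt                  # velocity changes position
    return y, v
```

    Students can vary `drag` and watch the velocity approach a terminal value while the acceleration goes to zero, directly confronting the common conflation of the two quantities.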

  5. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.
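
    The merge strategy described, two codes resident in one process that share arrays through a Python layer rather than exchanging files, can be mimicked in a toy sketch. Here numpy arrays stand in for the shared Fortran arrays, and the class and method names are invented for illustration:

```python
import numpy as np

class ParticleStore:
    """Arrays shared by reference between the two physics modules (no copies)."""
    def __init__(self, n):
        self.x = np.zeros((n, 3))    # positions
        self.v = np.zeros((n, 3))    # velocities

class CloudKicker:
    """Plays the POSINST-like role: applies electron-cloud kicks in place."""
    def kick(self, store, dv):
        store.v += dv                # mutates the shared array directly

class FieldPusher:
    """Plays the WARP-like role: advances particles with its own field solve."""
    def push(self, store, dt):
        store.x += store.v * dt      # sees the kicks immediately, no data exchange
```

    Because both modules hold references to the same arrays, each sees the other's updates without any explicit transfer, which is the essence of the POSINST/WARP interoperation through the shared Python interpreter.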

  6. "SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Sapar, A.; Poolamäe, R.

    2003-01-01

    A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and confined ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal of elaborating a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use physical evolutionary changes, in other words the relaxation of quantum state populations from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme makes it possible to replace the Λ-iteration procedure with a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for the NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consequent processes of absorption and emission, as in the case of radiative transfer in spectral lines. With duly chosen input parameters the code SMART enables computing the radiative acceleration imparted to the matter of a stellar atmosphere in turbulence clumps. This also makes it possible to connect the model atmosphere in more detail with the problem of stellar wind triggering. Another problem which has been incorporated into the computer code SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift.

  7. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  8. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
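
    The code-generation idea at the heart of GeNN, specializing the simulation source for a given model before compiling it so that model constants become literals, can be shown in miniature. This is a hypothetical leaky integrate-and-fire generator of our own; GeNN itself emits CUDA/C++ kernels, not Python:

```python
def generate_lif_update(dt, tau, v_rest, v_thresh):
    """Emit and compile an update function with model constants baked in as literals."""
    src = (
        "def update(v, i_in):\n"
        f"    v = v + {dt} * (({v_rest} - v) / {tau} + i_in)\n"
        f"    return (v, v >= {v_thresh})\n"
    )
    namespace = {}
    exec(src, namespace)             # 'compile' the generated source
    return namespace["update"]
```

    Baking parameters into the generated source lets the downstream compiler optimize each model variant, which is what makes code generation attractive for GPU kernels.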

  11. NASA's Microgravity Fluid Physics Program: Tolerability to Residual Accelerations

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond

    1998-01-01

    An overview of the NASA microgravity fluid physics program is presented. The necessary quality of a reduced-gravity environment, in terms of tolerable residual acceleration or g levels, is a concern that is inevitably raised for each new microgravity experiment. Methodologies have been reported in the literature that provide guidance in obtaining reasonable estimates of residual acceleration sensitivity for a broad range of fluid physics phenomena. Furthermore, a relatively large and growing database of microgravity experiments that have successfully been performed in terrestrial reduced gravity facilities and orbiting platforms exists. Similarity of experimental conditions and hardware has, in some cases, led new experiments to adopt prior experiments' g-requirements. The rationale applied to other experiments can, in principle, be a valuable guide to assist new Principal Investigators (PIs) in determining the residual acceleration tolerability of their flight experiments. The availability of g-requirements rationale from prior μg experiments is discussed. An example of establishing g tolerability requirements is demonstrated using a current microgravity fluid physics flight experiment. The Fluids and Combustion Facility (FCF), which is currently manifested on the US Laboratory of the International Space Station (ISS), will provide opportunities for fluid physics and combustion experiments throughout the life of the ISS. Although the FCF is not intended to accommodate all fluid physics experiments, it is expected to meet the science requirements of approximately 80% of the new PIs that enter the microgravity fluid physics program. The residual acceleration requirements for the FCF fluid physics experiments are based on a set of fourteen reference fluid physics experiments which are discussed.

  12. Reliability enhancement of Navier-Stokes codes through convergence acceleration

    NASA Technical Reports Server (NTRS)

    Merkle, Charles L.; Dulikravich, George S.

    1995-01-01

    Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flows are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., the problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
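
    The von Neumann analysis invoked above can be made concrete with a textbook example (not the Proteus scheme): for first-order upwind advection at Courant number c, the amplification factor is G(θ) = 1 − c(1 − e^{−iθ}), and stability requires |G(θ)| ≤ 1 for all Fourier angles θ. A small numpy sketch:

```python
import numpy as np

def upwind_amplification(c, thetas):
    """|G(theta)| for first-order upwind advection at Courant number c."""
    G = 1.0 - c * (1.0 - np.exp(-1j * thetas))
    return np.abs(G)
```

    Scanning θ over [0, 2π] shows |G| ≤ 1 for c ≤ 1 and growth for c > 1, the same kind of mode-by-mode check the report applies to diagnose stiffness-driven convergence degradation.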

  13. The Scanning Electron Microscope As An Accelerator For The Undergraduate Advanced Physics Laboratory

    NASA Astrophysics Data System (ADS)

    Peterson, Randolph S.; Berggren, Karl K.; Mondol, Mark

    2011-06-01

    Few universities or colleges have an accelerator for use with advanced physics laboratories, but many of these institutions have a scanning electron microscope (SEM) on site, often in the biology department. As an accelerator for the undergraduate advanced physics laboratory, the SEM is an excellent substitute for an ion accelerator. Although there are no nuclear physics experiments that can be performed with a typical 30 kV SEM, there is an opportunity for experimental work on accelerator physics, atomic physics, electron-solid interactions, and the basics of modern e-beam lithography.

  14. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
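
    The hierarchical (block) time-step scheme that GOTHIC adopts quantizes each particle's required step onto powers of two of a maximum step, so that at any given fine step only the subset of particles whose block step divides it is integrated. A schematic version (our own sketch, not GOTHIC's implementation):

```python
import numpy as np

def block_steps(dt_required, dt_max, max_level=10):
    """Round each particle's required step down to dt_max / 2^level."""
    levels = np.ceil(np.log2(dt_max / dt_required)).astype(int)
    levels = np.clip(levels, 0, max_level)
    return levels, dt_max / 2.0 ** levels          # guaranteed <= dt_required

def active_mask(fine_step, levels, max_level=10):
    """Particles integrated at fine step n: those whose block step divides n."""
    period = 2 ** (max_level - levels)             # block step measured in finest steps
    return fine_step % period == 0
```

    Most fine steps touch only the few particles needing the smallest steps, which is where the reported 3-5x speedup over a shared (global) time step comes from.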

  15. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to the reactor physics codes of Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions, which had not been covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable over a very wide range of core conditions with good accuracy, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent development activities are presented.

  16. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  17. Resource Letter AFHEP-1: Accelerators for the Future of High-Energy Physics

    NASA Astrophysics Data System (ADS)

    Barletta, William A.

    2012-02-01

    This Resource Letter provides a guide to literature concerning the development of accelerators for the future of high-energy physics. Research articles, books, and Internet resources are cited for the following topics: motivation for future accelerators, present accelerators for high-energy physics, possible future machines, and laboratory and collaboration websites.

  18. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

    This paper introduces recent improvements to the reactor physics codes of Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions, which had not been covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable over a very wide range of core conditions with good accuracy, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent development activities are presented.

  19. Better physical activity classification using smartphone acceleration sensor.

    PubMed

    Arif, Muhammad; Bilal, Mohsin; Kattan, Ahmed; Ahamed, S Iqbal

    2014-09-01

    Obesity is becoming a serious health problem for the worldwide population. Social interaction on mobile phones and computers through online social networks is one of the major causes of physical inactivity. For the health specialist, it is important to track the physical activities of obese or overweight patients to supervise weight-loss control. In this study, the acceleration sensor present in a smartphone is used to monitor the physical activity of the user. Six physical activities are classified: walking, jogging, sitting, standing, walking upstairs, and walking downstairs. Time-domain features are extracted from the acceleration data recorded by the smartphone during the different activities. The time and space complexity of the whole framework is reduced by optimal feature-subset selection and pruning of instances. Classification results for the six physical activities are reported in this paper. Using simple time-domain features, 99% classification accuracy is achieved. Furthermore, attribute-subset selection is used to remove redundant features and to minimize the time complexity of the algorithm. A subset of 30 features produced more than 98% classification accuracy for the six physical activities.
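
    Time-domain feature extraction of the kind described above can be sketched as follows. The specific features (per-axis mean, standard deviation, mean absolute change, interquartile range, plus magnitude statistics) are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np

def time_domain_features(window):
    """Extract simple time-domain features from one window of
    tri-axial acceleration samples (shape: n_samples x 3 axes)."""
    feats = []
    for axis in window.T:                      # x, y, z axes in turn
        feats += [
            axis.mean(),                       # average acceleration
            axis.std(),                        # variability
            np.abs(np.diff(axis)).mean(),      # mean absolute change
            np.percentile(axis, 75) - np.percentile(axis, 25),  # IQR
        ]
    mag = np.linalg.norm(window, axis=1)       # signal magnitude vector
    feats += [mag.mean(), mag.std()]
    return np.array(feats)
```

    The resulting 14-dimensional vectors would then feed a standard classifier (e.g. a decision tree or k-nearest neighbors), with attribute-subset selection pruning the feature list afterwards.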

  20. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with the mathematical correctness of the numerical algorithms in a code, while validation deals with the physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and the technology used in verification analysis have evolved and improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) hydrodynamics; (b) transport processes; and (c) dynamic strength of materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary, but not sufficient, step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of
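
    The core mechanic of such verification analysis, comparing code output against an exact continuum solution on successively refined grids and checking the observed convergence order, can be sketched in Python. The test equation, scheme, and tolerances here are our own illustrative choices, not problems from the suite:

```python
import math

def euler_error(dt, t_end=1.0):
    """Global error at t_end of forward Euler applied to u' = -u, u(0) = 1,
    whose exact continuum solution is u(t) = exp(-t)."""
    u = 1.0
    for _ in range(round(t_end / dt)):
        u += dt * (-u)
    return abs(u - math.exp(-t_end))

def observed_order(dt, refinement=2.0):
    """Observed order of accuracy from the errors on a coarse grid
    and a grid refined by the given factor."""
    e_coarse = euler_error(dt)
    e_fine = euler_error(dt / refinement)
    return math.log(e_coarse / e_fine, refinement)
```

    For forward Euler the observed order should approach the theoretical value of 1 as dt shrinks; a persistent deviation from the theoretical order is exactly the kind of discrepancy a verification benchmark is designed to flag.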

  1. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales; therefore, advanced computational tools utilizing high-performance computing are essential for modeling them accurately. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  2. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be... basic color for designating caution and for marking physical hazards such as: Striking against...

  3. Particle acceleration, transport and turbulence in cosmic and heliospheric physics

    NASA Technical Reports Server (NTRS)

    Matthaeus, W.

    1992-01-01

    In this progress report, the long term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence and large scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such turbulence processes might play in heliospheric, galactic, cosmic ray physics, and other space physics applications.

  4. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality and the use of different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we present an example of the application of the code to the laser-plasma accelerator staging experiment.

  5. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  6. TORBEAM 2.0, a paraxial beam tracing code for electron-cyclotron beams in fusion plasmas for extended physics applications

    NASA Astrophysics Data System (ADS)

    Poli, E.; Bock, A.; Lochbrunner, M.; Maj, O.; Reich, M.; Snicker, A.; Stegmeir, A.; Volpe, F.; Bertelli, N.; Bilato, R.; Conway, G. D.; Farina, D.; Felici, F.; Figini, L.; Fischer, R.; Galperti, C.; Happel, T.; Lin-Liu, Y. R.; Marushchenko, N. B.; Mszanowski, U.; Poli, F. M.; Stober, J.; Westerhof, E.; Zille, R.; Peeters, A. G.; Pereverzev, G. V.

    2018-04-01

    The paraxial WKB code TORBEAM (Poli, 2001) is widely used for the description of electron-cyclotron waves in fusion plasmas, retaining diffraction effects through the solution of a set of ordinary differential equations. With respect to its original form, the code has undergone significant transformations and extensions, in terms of both the physical model and the spectrum of applications. The code has been rewritten in Fortran 90 and transformed into a library, which can be called from within different (not necessarily Fortran-based) workflows. The models for both absorption and current drive have been extended, including e.g. fully-relativistic calculation of the absorption coefficient, momentum conservation in electron-electron collisions and the contribution of more than one harmonic to current drive. The code can be run also for reflectometry applications, with relativistic corrections for the electron mass. Formulas that provide the coupling between the reflected beam and the receiver have been developed. Accelerated versions of the code are available, with the reduced physics goal of inferring the location of maximum absorption (including or not the total driven current) for a given setting of the launcher mirrors. Optionally, plasma volumes within given flux surfaces and corresponding values of minimum and maximum magnetic field can be provided externally to speed up the calculation of full driven-current profiles. These can be employed in real-time control algorithms or for fast data analysis.

  7. Accelerating Innovation: How Nuclear Physics Benefits Us All

    DOE R&D Accomplishments Database

    2011-01-01

    Nuclear physics has accelerated innovation in improving our health; making the world safer; electricity, the environment, and archaeology; better computers; contributions to industry; and training the next generation of innovators.

  8. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; /Fermilab; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics-process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization.

  9. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems

    DTIC Science & Technology

    2018-04-19

    AFRL-AFOSR-JP-TR-2018-0035. Final report for AOARD Grant FA2386-16-1-4099, "CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems," Sandeep Shukla, Indian Institute of Technology Kanpur, India.

  10. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

    Accurate predictions of equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both the experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  11. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  12. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  13. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  14. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC module. These are described in the Appendix to this report. This project has the primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  15. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, including fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter-space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
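
    Two stages of the PIC cycle that such codes implement, charge deposition and the field solve, can be sketched in Python for a 1D periodic electrostatic case. This is our own minimal illustration in normalized units, not code from the ZPIC suite:

```python
import numpy as np

def deposit_and_solve(xp, qp, ng, L):
    """Cloud-in-cell (linear) charge deposition of particles at positions
    xp with charges qp onto a periodic 1D grid of ng cells and length L,
    followed by an FFT Poisson solve (-d2phi/dx2 = rho) and E = -dphi/dx."""
    dx = L / ng
    g = xp / dx                                # positions in grid units
    i = np.floor(g).astype(int) % ng           # left grid node
    w = g - np.floor(g)                        # linear weight to right node
    rho = np.zeros(ng)
    np.add.at(rho, i, qp * (1.0 - w) / dx)     # scatter-add charge density
    np.add.at(rho, (i + 1) % ng, qp * w / dx)
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx) # wavenumbers
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2         # skip k = 0 (neutral plasma)
    E = np.fft.ifft(-1j * k * phi_k).real      # E = -grad(phi)
    return rho, E
```

    The remaining stages of the cycle, gathering E back to the particle positions and leapfrog-pushing the particles, close the loop that a full PIC code repeats each time step.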

  16. Establishing confidence in complex physics codes: Art or science?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trucano, T.

    1997-12-31

    The ALEGRA shock wave physics code, currently under development at Sandia National Laboratories and partially supported by the US Accelerated Strategic Computing Initiative (ASCI), is generic to a certain class of physics codes: large, multi-application, intended to support a broad user community on the latest generation of massively parallel supercomputers, and in a continual state of formal development. To say that the author has "confidence" in the results of ALEGRA is to say something different than that he believes ALEGRA is "predictive." It is the purpose of this talk to illustrate the distinction between these two concepts. The author elects to perform this task in a somewhat historical manner. He will summarize certain older approaches to code validation, which he views as aiming to establish the predictive behavior of the code and which are distinguished by their emphasis on local information. He will conclude that these approaches are more art than science.

  17. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  18. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  19. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  20. Female acceleration tolerance: effects of menstrual state and physical condition.

    PubMed

    Heaps, C L; Fischer, M D; Hill, R C

    1997-06-01

    The literature contains a paucity of information on female tolerance to high sustained acceleration. With women now flying high-performance aircraft, gender-specific factors that may affect female acceleration tolerance have become increasingly important. The purpose of this investigation was to determine how menstrual state and physical condition affect acceleration tolerance. We hypothesized that the menstrual cycle would have no effect on acceleration tolerance and that a positive correlation would exist between physical fitness level and tolerance to high sustained acceleration. Centrifuge exposures for 8 female subjects consisted of a relaxed gradual-onset run (0.1 G/s) to the visual endpoint, a rapid-onset run (6 G/s) to +5 Gz for 15 s, and a +4.5 to +7 Gz simulated aerial combat maneuver (SACM) to physical exhaustion. Acceleration tolerance data were collected at the onset of menstruation and 1, 2, and 3 weeks following onset for two complete menstrual cycles. On separate days, body composition, anaerobic power output, and peak oxygen uptake were determined. Retrospective data from 10 male subjects who had performed the +4.5 to +7 Gz SACM were analyzed and compared with these data. Analysis of variance revealed no significant difference in relaxed tolerance or SACM duration between the four selected menstrual-cycle time points. Time-to-fatigue on the +4.5 to +7 Gz SACM was positively (p ≤ 0.05) correlated with absolute fat-free mass (r = 0.87) and anaerobic power production (r = 0.76) in female subjects. However, when these variables were adjusted for total body mass, the significant correlations no longer existed. No correlation was found between SACM duration and either absolute (L·min-1) or relative (mL·kg-1·min-1) aerobic fitness. Time-to-fatigue during the SACM was not significantly different between male and female subjects (250 +/- 97 and 246 +/- 149 s, respectively).

  1. Cosmic Acceleration, Dark Energy, and Fundamental Physics

    NASA Astrophysics Data System (ADS)

    Turner, Michael S.; Huterer, Dragan

    2007-11-01

    A web of interlocking observations has established that the expansion of the Universe is speeding up, not slowing down, revealing the presence of some form of repulsive gravity. Within the context of general relativity, the cause of cosmic acceleration is a highly elastic (p ≈ -ρ), very smooth form of energy called “dark energy” accounting for about 75% of the Universe. The “simplest” explanation for dark energy is the zero-point energy density associated with the quantum vacuum; however, all estimates for its value are many orders of magnitude too large. Other ideas for dark energy include a very light scalar field or a tangled network of topological defects. An alternate explanation invokes gravitational physics beyond general relativity. Observations and experiments underway, together with more precise cosmological measurements and laboratory experiments planned for the next decade, will test whether or not dark energy is the quantum energy of the vacuum or something more exotic, and whether or not general relativity can self-consistently explain cosmic acceleration. Dark energy is the most conspicuous example of physics beyond the standard model and perhaps the most profound mystery in all of science.

  2. Chemical vs. Physical Acceleration of Cement Hydration

    PubMed Central

    Bentz, Dale P.; Zunino, Franco; Lootens, Didier

    2016-01-01

    Cold weather concreting often requires the use of chemical accelerators to speed up the hydration reactions of the cement, so that setting and early-age strength development will occur in a timely manner. While calcium chloride (dihydrate – CaCl2·2H2O) is the most commonly used chemical accelerator, recent research using fine limestone powders has indicated their high proficiency for physically accelerating early-age hydration and reducing setting times. This paper presents a comparative study of the efficiency of these two approaches in accelerating hydration (as assessed via isothermal calorimetry), reducing setting times (Vicat needle), and increasing early-age mortar cube strength (1 d and 7 d). Both the CaCl2 and the fine limestone powder are used to replace a portion of the finest sand in the mortar mixtures, while keeping both the water-to-cement ratio and volume fractions of water and cement constant. Studies are conducted at 73.4 °F (23°C) and 50 °F (10 °C), so that activation energies can be estimated for the hydration and setting processes. Because the mechanisms of acceleration of the CaCl2 and limestone powder are different, a hybrid mixture with 1 % CaCl2 and 20 % limestone powder (by mass of cement) is also investigated. Both technologies are found to be viable options for reducing setting times and increasing early-age strengths, and it is hoped that concrete producers and contractors will consider the addition of fine limestone powder to their toolbox of techniques for assuring performance in cold weather and other concreting conditions where acceleration may be needed. PMID:28077884
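
    Estimating an activation energy from measurements at two temperatures, as described above, uses the two-point Arrhenius relation. A sketch in Python; the rate constants below are made-up illustrative values, not data from the paper:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Apparent activation energy Ea (J/mol) from rate constants k1, k2
    measured at absolute temperatures T1, T2 (K), via the two-point
    Arrhenius relation  ln(k1/k2) = (Ea/R) * (1/T2 - 1/T1)."""
    return R * math.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)
```

    With the study's two temperatures, 23 °C (296.15 K) and 10 °C (283.15 K), a process whose rate doubles between them would have an apparent activation energy of about 37 kJ/mol.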

  3. Prediction of scaling physics laws for proton acceleration with extended parameter space of the NIF ARC

    NASA Astrophysics Data System (ADS)

    Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy

    2017-10-01

    The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy (~1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (>=50% of energy in a 150 µm spot). This allows ARC to achieve proton and light-ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is not yet known how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate scaling laws for proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc., through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.

  4. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
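
    The statistical-physics analogy can be sketched generically: treat constellation points as mutually repelling particles and descend a pairwise potential while enforcing an average-power constraint, so that points spread out to maximize minimum distance per unit energy. This is an illustrative toy, not the authors' design algorithm; all parameters (point count, learning rate, Coulomb-like potential) are assumptions.

```python
import numpy as np

def optimize_constellation(M=8, steps=2000, lr=0.01, seed=1):
    """Gradient-descent sketch of an 'energy-efficient' constellation:
    points repel via a Coulomb-like pairwise force, and the set is
    renormalized to unit average symbol energy after every step."""
    rng = np.random.default_rng(seed)
    pts = rng.standard_normal((M, 2))
    for _ in range(steps):
        diff = pts[:, None, :] - pts[None, :, :]          # pairwise displacement
        d2 = (diff ** 2).sum(-1) + np.eye(M)              # eye() avoids self-interaction
        force = (diff / d2[..., None] ** 1.5).sum(axis=1) # repulsion ~ 1/d^2
        pts += lr * force
        pts *= np.sqrt(M / (pts ** 2).sum())              # unit average energy
    return pts

pts = optimize_constellation()
dmin = min(np.linalg.norm(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])
print(f"minimum distance at unit average energy: {dmin:.3f}")
```

    The same energy-minimization idea extends directly to D > 2 dimensions, which is where the gains over conventional QAM constellations arise.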

  5. Lecture Notes on Topics in Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Alex W.

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for Nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  6. Physical Model for the Evolution of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Yamashita, Tatsuro; Narikiyo, Osamu

    2011-12-01

    Using the shape space of codons and tRNAs, we give a physical description of genetic code evolution on the basis of the codon-capture and ambiguous-intermediate scenarios in a consistent manner. In the lowest-dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of codon levels, the two scenarios are classified into two different routes of the evolutionary process. For the ambiguous-intermediate scenario, we perform an evolutionary simulation implementing cost-based selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the non-unique translation of the code at the intermediate state, which is the weakness of the scenario. For the codon-capture scenario, survival against mutations under a mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.

  7. Standard interface files and procedures for reactor physics codes, version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, B.M.

    Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized. (15 references)

  8. Accelerator-based techniques for the support of senior-level undergraduate physics laboratories

    NASA Astrophysics Data System (ADS)

    Williams, J. R.; Clark, J. C.; Isaacs-Smith, T.

    2001-07-01

    Approximately three years ago, Auburn University replaced its aging Dynamitron accelerator with a new 2MV tandem machine (Pelletron) manufactured by the National Electrostatics Corporation (NEC). This new machine is maintained and operated for the University by Physics Department personnel, and the accelerator supports a wide variety of materials modification/analysis studies. Computer software is available that allows the NEC Pelletron to be operated from a remote location, and an Internet link has been established between the Accelerator Laboratory and the Upper-Level Undergraduate Teaching Laboratory in the Physics Department. Additional software supplied by Canberra Industries has also been used to create a second Internet link that allows live-time data acquisition in the Teaching Laboratory. Our senior-level undergraduates and first-year graduate students perform a number of experiments related to radiation detection and measurement as well as several standard accelerator-based experiments that have been added recently. These laboratory exercises will be described, and the procedures used to establish the Internet links between our Teaching Laboratory and the Accelerator Laboratory will be discussed.

  9. EDITORIAL: Laser and plasma accelerators Laser and plasma accelerators

    NASA Astrophysics Data System (ADS)

    Bingham, Robert

    2009-02-01

    as photon deceleration and acceleration and is the result of a modulational instability. Simulations reported by Trines et al, using photon-in-cell and wave-kinetic codes, agree extremely well with experimental observations. Ion acceleration is actively studied; for example, the papers by Robinson, Macchi, Marita and Tripathi all discuss different acceleration mechanisms, from direct laser acceleration and Coulomb explosion to double layers. Ion acceleration is an exciting development that may have great promise in oncology. A surprising application is in muon acceleration, demonstrated by Peano et al, who show that counterpropagating laser beams with variable frequencies drive a beat structure with variable phase velocity, leading to particle trapping and acceleration, with possible application to a future muon collider and neutrino factory. Laser and plasma accelerators remain one of the exciting areas of plasma physics, with applications in many areas of science ranging from laser fusion and novel high-brightness radiation sources to particle physics and medicine. The guest editor would like to thank all authors and referees for their invaluable contributions to this special issue.

  10. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
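
    The load-imbalance problem can be illustrated with a toy heuristic: partition an ordered chain of patches among ranks so that summed particle load, rather than patch count, is equalized. This greedy sketch is purely illustrative of the idea (production PIC codes typically balance patches along a space-filling curve with dynamic exchange); the function name and numbers are assumptions.

```python
def balance_patches(patch_loads, n_ranks):
    """Greedy partition of ordered patches into n_ranks contiguous chunks,
    closing each chunk once its summed load reaches the per-rank target."""
    total = sum(patch_loads)
    target = total / n_ranks
    partitions, current, acc = [], [], 0.0
    for i, load in enumerate(patch_loads):
        current.append(i)
        acc += load
        # close the chunk when the target is met, keeping ranks for the rest
        if acc >= target and len(partitions) < n_ranks - 1:
            partitions.append(current)
            current, acc = [], 0.0
    partitions.append(current)
    return partitions

# In LWFA the accelerated bunch concentrates particles in a few patches
loads = [1, 1, 1, 10, 10, 1, 1, 1]
parts = balance_patches(loads, 3)
print(parts, [sum(loads[i] for i in p) for p in parts])
```

    With an equal-patch-count split the most loaded rank would carry 21 units of work here; the load-aware split caps it at 13, which is the kind of gain (imperfect but substantial) that motivates dynamic load management.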

  11. Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, J.S.

    1981-01-01

    In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves the purpose of understanding and controlling the characteristics of each code, but also ensures their reliability as codes continually change due to ongoing modifications and machine transfers. This paper will present the results of a comprehensive verification of three code packages - LEOPARD, LASER, and EPRI-CELL.

  12. Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study

    PubMed Central

    Siu, Eric; Campitelli, Michael A.; Kwong, Jeffrey C.

    2012-01-01

    Background Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. Methodology/Principal Findings We conducted a cohort study of Ontario respondents to Statistics Canada’s population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Conclusion/Significance Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza. PMID:22737242
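
    The study's core estimate, an odds ratio from logistic regression, can be reproduced in miniature on synthetic data. The sketch below is not the study's analysis: the cohort is simulated with an assumed true odds ratio of 0.85 (matching the reported point estimate for moderately active individuals), and the fit is a plain Newton-Raphson implementation rather than a statistics package.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit logistic regression by Newton-Raphson; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                      # IRLS weights
        grad = X.T @ (y - p)
        H = (X * W[:, None]).T @ X           # observed information
        beta += np.linalg.solve(H, grad)
    return beta

# Synthetic cohort: activity (1 = active) lowers the odds of an influenza-coded visit
rng = np.random.default_rng(42)
n = 20000
active = rng.integers(0, 2, n)
log_odds = -2.0 + np.log(0.85) * active      # assumed true OR = 0.85
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(float)
X = np.column_stack([np.ones(n), active])
beta = fit_logistic(X, y)
print(f"estimated odds ratio: {np.exp(beta[1]):.2f}")  # true value is 0.85
```

    The real analysis additionally adjusts for covariates (age, season, comorbidity), which simply means more columns in X.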

  13. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1992-01-01

    In the area of solar physics, new calculations of the acoustic wave energy fluxes generated in the solar convective zone was performed. The original theory developed was corrected by including a new frequency factor describing temporal variations of the turbulent energy spectrum. We have modified the original Stein code by including this new frequency factor, and tested the code extensively. Another possible source of the mechanical energy generated in the solar convective zone is the excitation of magnetic flux tube waves which can carry energy along the tubes far away from the region. The problem as to how efficiently those waves are generated in the Sun was recently solved. The propagation of nonlinear magnetic tube waves in the solar atmosphere was calculated, and mode coupling, shock formation, and heating of the local medium was studied. The wave trapping problems and evaluation of critical frequencies for wave reflection in the solar atmosphere was studied. It was shown that the role played by Alfven waves in the wind accelerations and the coronal hole heating is dominant. Presently, we are performing calculations of wave energy fluxes generated in late-type dwarf stars and studying physical processes responsible for the heating of stellar chromospheres and coronae. In the area of physics of waves, a new analytical approach for studying linear Alfven waves in smoothly nonuniform media was recently developed. This approach is presently being extended to study the propagation of linear and nonlinear magnetohydrodynamic (MHD) waves in stratified, nonisothermal and solar atmosphere. The Lighthill theory of sound generation to nonisothermal media (with a special temperature distribution) was extended. Energy cascade by nonlinear MHD waves and possible chaos driven by these waves are presently considered.

  14. Noncoherent Physical-Layer Network Coding with FSK Modulation: Relay Receiver Design Issues

    DTIC Science & Technology

    2011-03-01

    Published in IEEE Transactions on Communications, vol. 59, no. 9, September 2011, p. 2595. Index terms: noncoherent reception, channel estimation. From the introduction: "In the two-way relay channel (TWRC), a pair of source terminals exchange information..."

  15. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body.

    PubMed

    Arif, Muhammad; Kattan, Ahmed

    2015-01-01

    Monitoring physical activities by using wireless sensors is helpful for identifying postural orientation and movements in the real-life environment. A simple and robust method based on time domain features to identify physical activities is proposed in this paper; it uses sensors placed on the subjects' wrist, chest and ankle. A feature set based on time domain characteristics of the acceleration signal recorded by acceleration sensors is proposed for the classification of twelve physical activities. Nine subjects performed twelve different types of physical activities, including sitting, standing, walking, running, cycling, Nordic walking, ascending stairs, descending stairs, vacuum cleaning, ironing clothes, jumping rope, and lying down (resting state). Their ages were 27.2 ± 3.3 years and their body mass index (BMI) was 25.11 ± 2.6 kg/m². Classification results demonstrated a high validity, showing precision (positive predictive value) and recall (sensitivity) of more than 95% for all physical activities. The overall classification accuracy for a combined feature set of three sensors is 98%. The proposed framework can be used to monitor the physical activities of a subject, which can be very useful for health professionals to assess the physical activity of healthy individuals as well as patients.
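
    A minimal sketch of the time-domain feature idea: per-axis mean and standard deviation plus signal magnitude area, computed over a window of tri-axial acceleration. The feature names and the two synthetic windows ("lying" vs "walking") are illustrative assumptions, not the paper's data or exact feature set.

```python
import numpy as np

def time_domain_features(window):
    """Simple time-domain features from an (N, 3) acceleration window:
    per-axis mean (orientation), per-axis std (movement intensity),
    and signal magnitude area."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    sma = np.abs(window).sum() / len(window)
    return np.concatenate([mean, std, [sma]])

# Two toy windows: "lying" (gravity plus sensor noise) vs "walking" (periodic motion)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
lying = np.column_stack([np.zeros(200), np.zeros(200), 9.8 * np.ones(200)])
lying += 0.05 * rng.standard_normal((200, 3))
walking = np.column_stack([2.0 * np.sin(2 * np.pi * 2 * t),
                           2.0 * np.cos(2 * np.pi * 2 * t),
                           9.8 + np.sin(2 * np.pi * 4 * t)])

f_lying = time_domain_features(lying)
f_walking = time_domain_features(walking)
print("lying std:", f_lying[3:6].round(2), "walking std:", f_walking[3:6].round(2))
```

    Even these crude features separate the two activities by an order of magnitude in movement intensity; a classifier (the paper reports >95% precision and recall) then operates on such feature vectors from three sensor locations.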

  16. Accelerator Physics Working Group Summary

    NASA Astrophysics Data System (ADS)

    Li, D.; Uesugi, T.; Wildner, E.

    2010-03-01

    The Accelerator Physics Working Group addressed the worldwide R&D activities performed in support of future neutrino facilities. These studies cover R&D activities for Super Beam, Beta Beam and muon-based Neutrino Factory facilities. Beta Beam activities reported the important progress made, together with the research activity planned for the coming years. Discussion sessions were also organized jointly with other working groups in order to define common ground for the optimization of a future neutrino facility. Lessons learned from already operating neutrino facilities provide key information for the design of any future neutrino facility, and were also discussed in this meeting. Radiation damage, remote handling for equipment maintenance and exchange, and primary proton beam stability and monitoring were among the important subjects presented and discussed. Status reports for each of the facility subsystems were presented: proton drivers, targets, capture systems, and muon cooling and acceleration systems. The preferred scenario for each type of possible future facility was presented, together with the challenges and remaining issues. The baseline specification for the muon-based Neutrino Factory was reviewed and updated where required. This report will emphasize new results and ideas and discuss possible changes in the baseline scenarios of the facilities. A list of possible future steps is proposed that should be followed up at NuFact10.

  17. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebe, A.; Leveling, A.; Lu, T.

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  18. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    NASA Astrophysics Data System (ADS)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
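
    The two-stage structure of the algorithm can be caricatured in a few lines: stage 1 builds a radionuclide "inventory" from primary interactions, and stage 2 scores the dose from decay gammas emitted by that inventory. Everything below is a toy (a fixed activation probability, one gamma per decay, a bare 1/r² point kernel with no shielding or buildup), not the MARS15/FermiCORD physics.

```python
import math
import random

def two_stage_residual_dose(n_primaries=100_000, detector_r=2.0, seed=7):
    """Toy two-stage workflow: (1) Monte Carlo activation of a structure,
    (2) point-kernel scoring of the decay-gamma fluence at distance r."""
    random.seed(seed)
    # Stage 1: each primary activates the structure with 10% probability
    inventory = sum(1 for _ in range(n_primaries) if random.random() < 0.1)
    # Stage 2: one gamma per activated nucleus; unshielded 1/(4 pi r^2) kernel
    per_decay_fluence = 1.0 / (4.0 * math.pi * detector_r ** 2)
    dose_proxy = inventory * per_decay_fluence
    return inventory, dose_proxy

inv, dose = two_stage_residual_dose()
print(f"activated nuclei: {inv}, fluence proxy at 2 m: {dose:.1f}")
```

    The value of the two-stage split is that the expensive activation calculation (stage 1) is done once, while stage 2 can be re-scored cheaply for arbitrary detector locations and cool-down assumptions.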

  19. Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-06-16

    This note presents the chromaticity calculations and code comparison results for the (x-ray lithography source) XLS (Chasman Green, XUV Cosy lattice) and (2 magnet 4T) SXLS lattices, with the standard beam optic codes, including programs SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is a part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.

  20. Radiation Protection Studies for Medical Particle Accelerators using Fluka Monte Carlo Code.

    PubMed

    Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario

    2017-04-01

    Radiation protection (RP) in the use of medical cyclotrons involves many aspects, both in routine use and in the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital evaluated: the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; and the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of the physical and transport parameters to be used in the energy range of interest, through an extensive measurement campaign of the neutron ambient dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility.

  1. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can interface with other programs easily. The CATS produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs.

  2. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, A.M.M.; Paulson, C.C.; Peacock, M.A.

    1995-10-01

    A beamline systems code, that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies, is described. The overall program is a joint Grumman, G.H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  3. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, Alan M. M.; Paulson, C. C.; Peacock, M. A.

    1995-09-15

    A beamline systems code, that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies, is described. The overall program is a joint Grumman, G. H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  4. Physical Interpretation of the Schott Energy of An Accelerating Point Charge and the Question of Whether a Uniformly Accelerating Charge Radiates

    ERIC Educational Resources Information Center

    Rowland, David R.

    2010-01-01

    A core topic in graduate courses in electrodynamics is the description of radiation from an accelerated charge and the associated radiation reaction. However, contemporary papers still express a diversity of views on the question of whether or not a uniformly accelerating charge radiates suggesting that a complete "physical" understanding of the…

  5. Implementation of an accelerated physical examination course in a doctor of pharmacy program.

    PubMed

    Ho, Jackie; Bidwal, Monica K; Lopes, Ingrid C; Shah, Bijal M; Ip, Eric J

    2014-12-15

    To describe the implementation of a 1-day accelerated physical examination course for a doctor of pharmacy program and to evaluate pharmacy students' knowledge, attitudes, and confidence in performing physical examination. Using a flipped teaching approach, course coordinators collaborated with a physician faculty member to design and develop the objectives of the course. Knowledge, attitude, and confidence survey questions were administered before and after the practical laboratory. Following the practical laboratory, knowledge improved by 8.3% (p<0.0001). Students' perceived ability and confidence to perform a physical examination significantly improved (p<0.0001). A majority of students responded that reviewing the training video (81.3%) and reading material (67.4%) prior to the practical laboratory was helpful in learning the physical examination. An accelerated physical examination course using a flipped teaching approach was successful in improving students' knowledge of, attitudes about, and confidence in using physical examination skills in pharmacy practice.

  6. Implementation of an Accelerated Physical Examination Course in a Doctor of Pharmacy Program

    PubMed Central

    Ho, Jackie; Lopes, Ingrid C.; Shah, Bijal M.; Ip, Eric J.

    2014-01-01

    Objective. To describe the implementation of a 1-day accelerated physical examination course for a doctor of pharmacy program and to evaluate pharmacy students’ knowledge, attitudes, and confidence in performing physical examination. Design. Using a flipped teaching approach, course coordinators collaborated with a physician faculty member to design and develop the objectives of the course. Knowledge, attitude, and confidence survey questions were administered before and after the practical laboratory. Assessment. Following the practical laboratory, knowledge improved by 8.3% (p<0.0001). Students’ perceived ability and confidence to perform a physical examination significantly improved (p<0.0001). A majority of students responded that reviewing the training video (81.3%) and reading material (67.4%) prior to the practical laboratory was helpful in learning the physical examination. Conclusion. An accelerated physical examination course using a flipped teaching approach was successful in improving students’ knowledge of, attitudes about, and confidence in using physical examination skills in pharmacy practice. PMID:25657369

  7. A physical process of the radial acceleration of disc galaxies

    NASA Astrophysics Data System (ADS)

    Wilhelm, Klaus; Dwivedi, Bhola N.

    2018-03-01

    An impact model of gravity designed to emulate Newton's law of gravitation is applied to the radial acceleration of disc galaxies. Based on this model (Wilhelm et al. 2013), the rotation velocity curves can be understood without the need to postulate any dark matter contribution. The increased acceleration in the plane of the disc is a consequence of multiple interactions of gravitons (called `quadrupoles' in the original paper) and their subsequent propagation in this plane rather than in three-dimensional space. The concept provides a physical process that relates the fitted acceleration scale defined by McGaugh et al. (2016) to the mean free path length of gravitons in the discs of galaxies. It may also explain the gravitational interaction at low acceleration levels in MOdified Newtonian Dynamics (MOND; Milgrom 1983, 1994, 2015, 2016). Three examples are discussed in some detail: the spiral galaxies NGC 7814, NGC 6503 and M 33.

  8. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  9. Signal of Acceleration and Physical Mechanism of Water Cycle in Xinjiang, China

    PubMed Central

    Feng, Guo-Lin; Wu, Yong-Ping

    2016-01-01

    Global warming accelerates the water cycle, with regional differences. However, little is known about the physical mechanism behind this phenomenon. To reveal the links between the water cycle and the climatic environment, we analyzed the changes of water cycle elements and their relationships with climatic and environmental factors. We found that when global warming was significant, during the period 1986-2003, precipitation in the Tarim mountains, as well as in Xinjiang overall, increased rapidly, except in the Tarim plains, indicating a signal of acceleration of the water cycle in Xinjiang. The speed of the water cycle is mainly affected by altitude, latitude, longitude and slope direction, and the most fundamental element is temperature. Moreover, according to the Clausius-Clapeyron relation, we found that climate change induces a temperature increase and accelerates the local water cycle only in wet places. Our results provide a possible physical mechanism of the water cycle and thus link climate change to water circulation. PMID:27907078
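
    The thermodynamic driver invoked here, the Clausius-Clapeyron relation, can be made concrete with the August-Roche-Magnus approximation for saturation vapor pressure over water: warming raises the atmosphere's moisture-holding capacity by roughly 6-7% per kelvin, which is why warming accelerates the cycle mainly where moisture is available. The sketch below uses the WMO-recommended Magnus coefficients; the function name is illustrative.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """August-Roche-Magnus approximation to the Clausius-Clapeyron relation:
    saturation vapor pressure (hPa) over liquid water at t degrees Celsius."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Fractional increase in moisture-holding capacity per kelvin near 20 degC
e20 = saturation_vapor_pressure_hpa(20.0)
e21 = saturation_vapor_pressure_hpa(21.0)
print(f"e_s(20 C) = {e20:.1f} hPa, increase per K ~ {100 * (e21 / e20 - 1):.1f}%")
```

    In arid regions (such as the Tarim plains) the limiting factor is moisture supply rather than holding capacity, consistent with the paper's finding that the acceleration appears only in wet places.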

  10. Signal of Acceleration and Physical Mechanism of Water Cycle in Xinjiang, China.

    PubMed

    Feng, Guo-Lin; Wu, Yong-Ping

    2016-01-01

    Global warming accelerates the water cycle, with regional differences. However, little is known about the physical mechanism behind this phenomenon. To reveal the links between the water cycle and the climatic environment, we analyzed the changes of water cycle elements and their relationships with climatic and environmental factors. We found that when global warming was significant, during the period 1986-2003, precipitation in the Tarim mountains, as well as in Xinjiang overall, increased rapidly, except in the Tarim plains, indicating a signal of acceleration of the water cycle in Xinjiang. The speed of the water cycle is mainly affected by altitude, latitude, longitude and slope direction, and the most fundamental element is temperature. Moreover, according to the Clausius-Clapeyron relation, we found that climate change induces a temperature increase and accelerates the local water cycle only in wet places. Our results provide a possible physical mechanism of the water cycle and thus link climate change to water circulation.

  11. Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System

    NASA Astrophysics Data System (ADS)

    Aizawa, Naoto; Iwasaki, Tomohiko

    2014-06-01

    A safety analysis code system for the beam transport and core of an accelerator-driven system (ADS) is developed for analyses of beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport part to calculate the shape and incident position of the beam at the target. In the core analysis part, neutronics, thermal-hydraulics, and cladding-failure analyses are performed with the ADS dynamics code ADSE, on the basis of an external-source database calculated by PHITS and a cross-section database calculated by SRAC, together with cladding-failure analysis programs for thermoelastic deformation and creep. Using the code system, beam transient analyses are performed for the ADS proposed by the Japan Atomic Energy Agency. As a result, the cladding temperature rises rapidly and plastic deformation occurs within several seconds. In addition, the cladding is predicted to fail by creep within a hundred seconds. These results show that such beam transients can cause cladding failure.

  12. Development of a new lattice physics code robin for PWR application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Chen, G.

    2013-07-01

    This paper describes the methodologies and preliminary verification results of ROBIN, a new lattice physics code being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to carry out the various tasks of lattice physics analysis integrate both established and very recent techniques. Methods applied in many of today's production-level LWR lattice codes, such as equivalence theory for resonance treatment and the method of characteristics for neutron transport calculation, are adopted alongside useful new methods such as the enhanced neutron-current method for Dancoff corrections in large, complicated geometries and the log-linear rate-constant power depletion method for Gd-bearing fuel. A small sample of verification results is provided to illustrate the type of accuracy achievable with ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs of PWR lattice analysis and has the potential to become a production-quality code in the future. (authors)
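
As one concrete piece of what a lattice code's depletion module must handle, here is a generic one-group point-depletion step. This is not the log-linear rate-constant method the paper implements; the Gd-157 cross-section and flux below are order-of-magnitude assumptions, chosen to show why burnable absorbers deplete so quickly:

```python
import math

def deplete(n0, sigma_a_barns, flux, dt_seconds):
    """One-group point depletion of an absorber: dN/dt = -sigma * phi * N.

    sigma_a_barns : microscopic absorption cross section (barns)
    flux          : scalar neutron flux (n/cm^2/s)
    Returns the number density after dt (same units as n0).
    """
    sigma_cm2 = sigma_a_barns * 1e-24  # barn -> cm^2
    return n0 * math.exp(-sigma_cm2 * flux * dt_seconds)

# Illustrative values (assumed, not from the ROBIN paper):
# Gd-157 thermal absorption is of order 2.5e5 barns, and a typical
# PWR thermal flux is of order 1e14 n/cm^2/s.
n0, sigma, phi = 1.0, 2.5e5, 1e14
one_day = 86400.0
print(f"remaining fraction after 30 days: {deplete(n0, sigma, phi, 30 * one_day):.2e}")
```

The near-total burnout over a cycle is what makes Gd-bearing pins numerically stiff and motivates specialized depletion treatments.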

  13. Accelerator physics and technology challenges of very high energy hadron colliders

    NASA Astrophysics Data System (ADS)

    Shiltsev, Vladimir D.

    2015-08-01

    High energy hadron colliders have been at the forefront of particle physics for more than three decades. At present, the international particle physics community is considering several options for a 100 TeV proton-proton collider as a possible post-LHC energy frontier facility. The method of colliding beams has not fully exhausted its potential, but its progress has slowed considerably. This paper briefly reviews the accelerator physics and technology challenges of future very high energy colliders and outlines the areas of research and development required to establish their technical and financial feasibility.

  14. Accelerator physics and technology challenges of very high energy hadron colliders

    DOE PAGES

    Shiltsev, Vladimir D.

    2015-08-20

    High energy hadron colliders have been at the forefront of particle physics for more than three decades. At present, the international particle physics community is considering several options for a 100 TeV proton–proton collider as a possible post-LHC energy frontier facility. The method of colliding beams has not fully exhausted its potential, but its progress has slowed considerably. This article briefly reviews the accelerator physics and technology challenges of future very high energy colliders and outlines the areas of research and development required to establish their technical and financial feasibility.

  15. Enhanced verification test suite for physics simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.
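
The style of check such a suite supports can be sketched in a few lines: compare a discretization against a known continuum solution on two grids and confirm that the observed order of accuracy matches the design order. The scheme and test function below are a generic illustration, not one of the suite's problems:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def observed_order(f, df_exact, x, h):
    """Richardson-style observed order of accuracy from two grid spacings."""
    e1 = abs(central_diff(f, x, h) - df_exact(x))
    e2 = abs(central_diff(f, x, h / 2) - df_exact(x))
    return math.log(e1 / e2, 2)

# The scheme is formally O(h^2); verification checks that the
# observed order matches this design order.
p = observed_order(math.sin, math.cos, 1.0, 1e-2)
print(f"observed order ~= {p:.3f}")
```

Verification problems with exact solutions turn this qualitative idea into a quantitative pass/fail criterion for a whole code.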

  16. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles, across wider ranges of model parameters, and at larger spatial and temporal scales than has previously been possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in understanding the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.

  17. Scheduling observational and physical practice: influence on the coding of simple motor sequences.

    PubMed

    Ellenbuerger, Thomas; Boutin, Arnaud; Blandin, Yannick; Shea, Charles H; Panzer, Stefan

    2012-01-01

    The main purpose of the present experiment was to determine the coordinate system used in the development of movement codes when observational and physical practice are scheduled across practice sessions. The task was to reproduce a 1,300-ms spatial-temporal pattern of elbow flexions and extensions. An intermanual transfer paradigm with a retention test and two effector (contralateral limb) transfer tests was used. The mirror effector transfer test required the same pattern of homologous muscle activation and sequence of limb joint angles as performed or observed during practice; the non-mirror effector transfer test required the same spatial pattern of movements. The test results following the first acquisition session replicated the findings of Gruetzmacher, Panzer, Blandin, and Shea (2011). The results following the second acquisition session indicated a strong advantage, on both the retention and mirror transfer tests relative to the non-mirror transfer test, for participants who received physical practice in both practice sessions or observational practice followed by physical practice. These results demonstrate that codes based in motor coordinates can be developed relatively quickly and effectively for a simple spatial-temporal movement sequence when participants are provided with physical practice or observation followed by physical practice; physical practice followed by observational practice, or observational practice alone, limits the development of such codes.

  18. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA: the Fermi GPU. In biomedical optics, the MC method is the gold-standard approach for simulating light transport in biological tissue, both for its accuracy and for its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for photodynamic therapy (PDT), is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
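
The MC method being accelerated here reduces, in its simplest form, to sampling photon free paths. A minimal CPU sketch: estimate un-collided transmission through a purely absorbing slab and check it against the Beer-Lambert law (single layer, absorption only, unlike the layered scattering geometries MCML handles):

```python
import math
import random

def transmit_fraction(mu_a, thickness, n_photons=200_000, seed=1):
    """Monte Carlo estimate of un-collided transmission through a purely
    absorbing slab; the analytic answer is Beer-Lambert, exp(-mu_a * d)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # Sample a free path length from Exponential(mu_a).
        if rng.expovariate(mu_a) >= thickness:
            transmitted += 1
    return transmitted / n_photons

mu_a, d = 1.5, 1.0          # absorption coefficient (1/cm), slab depth (cm)
mc = transmit_fraction(mu_a, d)
exact = math.exp(-mu_a * d)
print(f"MC {mc:.4f} vs Beer-Lambert {exact:.4f}")
```

Each photon history is independent, which is exactly why the method maps so well onto thousands of GPU threads; the contention discussed in the paper arises only when those threads accumulate results into shared tallies.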

  19. Pulsed power accelerator for material physics experiments

    DOE PAGES

    Reisman, D.  B.; Stoltzfus, B.  S.; Stygar, W.  A.; ...

    2015-09-01

    We have developed the design of Thor: a pulsed power accelerator that delivers a precisely shaped current pulse with a peak value as high as 7 MA to a strip-line load. The peak magnetic pressure achieved within a 1-cm-wide load is as high as 100 GPa. Thor is powered by as many as 288 decoupled and transit-time-isolated bricks. Each brick consists of a single switch and two capacitors connected electrically in series. The bricks can be individually triggered to achieve a high degree of current pulse tailoring. Because the accelerator is impedance matched throughout, capacitor energy is delivered to the strip-line load with an efficiency as high as 50%. We used iterative finite element method (FEM), circuit, and magnetohydrodynamic simulations to develop an optimized accelerator design. When powered by 96 bricks, Thor delivers as much as 4.1 MA to a load and achieves peak magnetic pressures as high as 65 GPa. When powered by 288 bricks, Thor delivers as much as 6.9 MA to a load and achieves magnetic pressures as high as 170 GPa. We have developed an algebraic calculational procedure that uses the single-brick basis function to determine the brick-triggering sequence necessary to generate a highly tailored current-pulse time history for shockless loading of samples. Thor will drive a wide variety of magnetically driven shockless ramp compression, shockless flyer plate, shock-ramp, equation of state, material strength, phase transition, and other advanced material physics experiments.
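
The brick-superposition idea behind pulse tailoring can be sketched as follows. The basis-function shape (a damped sine) and all numerical values are illustrative assumptions, not Thor design data:

```python
import math

def brick_current(t, t_trigger, i_peak=25e3, tau=300e-9, period=1.2e-6):
    """Single-brick basis function: a damped sine starting at t_trigger.
    Shape and parameters are assumed for illustration only."""
    if t < t_trigger:
        return 0.0
    dt = t - t_trigger
    return i_peak * math.exp(-dt / tau) * math.sin(2 * math.pi * dt / period)

def total_current(t, trigger_times):
    """Superpose the brick basis functions (the machine is treated as
    linear, so individual brick contributions simply add)."""
    return sum(brick_current(t, t0) for t0 in trigger_times)

# Stagger 8 bricks to stretch the current rise into a tailored ramp.
triggers = [k * 100e-9 for k in range(8)]
for t in (0.2e-6, 0.4e-6, 0.6e-6):
    print(f"t = {t * 1e6:.1f} us: {total_current(t, triggers) / 1e3:.1f} kA")
```

Choosing the trigger times is then a fitting problem: find the delays whose summed basis functions best match the desired shockless-loading current history.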

  20. Accelerated Physical Stability Testing of Amorphous Dispersions.

    PubMed

    Mehta, Mehak; Suryanarayanan, Raj

    2016-08-01

    The goal was to develop an accelerated physical stability testing method for amorphous dispersions. Water sorption is known to cause plasticization and may accelerate drug crystallization. In an earlier investigation, it was observed that both the increase in mobility and the decrease in stability in amorphous dispersions were explained by the "plasticization" effect of water (Mehta et al. Mol. Pharmaceutics 2016, 13 (4), 1339-1346). In this work, the influence of water concentration (up to 1.8% w/w) on the correlation between mobility and crystallization in felodipine dispersions was investigated. With an increase in water content, the α-relaxation time as well as the time for 1% w/w felodipine crystallization decreased. The relaxation times of the systems, obtained with different water concentrations, overlapped when the temperature was scaled (Tg/T). The temperature dependencies of the α-relaxation time and the crystallization time were unaffected by the water concentration. Thus, the value of the coupling coefficient, up to a water concentration of 1.8% w/w, was approximately constant. Based on these findings, the use of "water sorption" is proposed to build predictive models for crystallization in slow-crystallizing dispersions.

  1. Physical-layer network coding in coherent optical OFDM systems.

    PubMed

    Guan, Xun; Chan, Chun-Kit

    2015-04-20

    We present the first experimental demonstration and characterization of the application of optical physical-layer network coding in coherent optical OFDM systems. It combines two optical OFDM frames to share the same link so as to enhance system throughput, while individual OFDM frames can be recovered with digital signal processing at the destined node.
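
The combine-and-recover step can be illustrated digitally. This bit-level XOR sketch is only an analogue of the optical-field superposition the paper demonstrates, but it shows why a node that knows its own frame can extract the other node's frame from the combined signal:

```python
def xor_frames(a: bytes, b: bytes) -> bytes:
    """Bitwise combination of two equal-length frames: the digital
    analogue of the superposition exploited in physical-layer network
    coding (here at bit level rather than on optical fields)."""
    assert len(a) == len(b), "frames must be the same length"
    return bytes(x ^ y for x, y in zip(a, b))

frame_a = b"node-A payload"
frame_b = b"node-B payload"

coded = xor_frames(frame_a, frame_b)      # shares one link/timeslot
recovered_b = xor_frames(coded, frame_a)  # node A XORs out its own frame
print(recovered_b)
```

Because one coded transmission carries information for both destinations, throughput on the shared link is effectively doubled, which is the gain the optical OFDM demonstration targets.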

  2. Physics in "Real Life": Accelerator-based Research with Undergraduates

    NASA Astrophysics Data System (ADS)

    Klay, J. L.

    All undergraduates in physics and astronomy should have access to significant research experiences. When given the opportunity to tackle challenging open-ended problems outside the classroom, students build their problem-solving skills in ways that better prepare them for the workplace or future research in graduate school. Accelerator-based research on fundamental nuclear and particle physics can provide a myriad of opportunities for undergraduate involvement in hardware and software development as well as "big data" analysis. The collaborative nature of large experiments exposes students to scientists of every culture and helps them begin to build their professional network even before they graduate. This paper presents an overview of my experiences - the good, the bad, and the ugly - engaging undergraduates in particle and nuclear physics research at the CERN Large Hadron Collider and the Los Alamos Neutron Science Center.

  3. Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection

    NASA Astrophysics Data System (ADS)

    Alejandro Munoz Sepulveda, Patricio

    2017-04-01

    The efficient electron acceleration detected in the solar corona by means of hard X-ray emission is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms of non-thermal electrons in solar flares. Previous work in this direction was based mostly on test-particle calculations or 2D fully kinetic PiC simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms in 3D magnetic reconnection. To this end, we carried out 3D Particle-in-Cell (PiC) numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts in the guiding-center approximation on the overall acceleration, and their dependence on different plasma parameters.
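
Extracting a spectral index like the one reported here is commonly done with a continuous maximum-likelihood fit to the high-energy tail. A sketch on synthetic data (the sampler, the index value 2.4, and the cutoff are illustrative assumptions, not outputs of the simulations):

```python
import math
import random

def sample_power_law(n, s, e_min, seed=7):
    """Draw n energies from p(E) ~ E^-s for E >= e_min via inverse CDF."""
    rng = random.Random(seed)
    return [e_min * (1 - rng.random()) ** (-1 / (s - 1)) for _ in range(n)]

def fit_spectral_index(energies, e_min):
    """Continuous maximum-likelihood estimate of the power-law index
    (the standard Hill/Clauset-style estimator)."""
    n = len(energies)
    return 1 + n / sum(math.log(e / e_min) for e in energies)

energies = sample_power_law(50_000, s=2.4, e_min=1.0)
print(f"fitted spectral index: {fit_spectral_index(energies, 1.0):.2f}")
```

In practice the choice of e_min (where the non-thermal tail departs from the thermal core) dominates the uncertainty of such fits.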

  4. FERMILAB ACCELERATOR R&D PROGRAM TOWARDS INTENSITY FRONTIER ACCELERATORS: STATUS AND PROGRESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiltsev, Vladimir

    2016-11-15

    The 2014 P5 report identified accelerator-based neutrino and rare-decay physics research as a centrepiece of the US domestic HEP program at Fermilab. Operation, upgrade, and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss key elements of the accelerator physics and technology R&D program toward future multi-MW proton accelerators and present its status and progress.

  5. Shielding analyses for repetitive high energy pulsed power accelerators

    NASA Astrophysics Data System (ADS)

    Jow, H. N.; Rao, D. V.

    Sandia National Laboratories (SNL) designs, tests, and operates a variety of accelerators that generate large amounts of high energy Bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield it from the radiation workers occupying surrounding test cells and work areas. These test cells, about 9 ft high, are constructed of high-density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility, and cost considerations impose certain restrictions on the design of these test cells. The SNL Health Physics division is tasked with evaluating the adequacy of each test cell design and comparing the resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of that effort and its results.
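
At their simplest, such shielding adequacy checks reduce to point-kernel estimates of the form D = D0 · B · exp(-µx). A minimal sketch; the attenuation coefficient and dose rate below are illustrative assumptions, and real shielding codes add buildup, geometry, and skyshine treatments:

```python
import math

def dose_behind_shield(d0, mu_per_cm, thickness_cm, buildup=1.0):
    """Point-kernel estimate of dose rate behind a slab shield:
    D = D0 * B * exp(-mu * x). The buildup factor B accounts for
    scattered photons; B = 1 is the narrow-beam limit."""
    return d0 * buildup * math.exp(-mu_per_cm * thickness_cm)

# Illustrative numbers (assumed): ~1 MeV photons in ordinary concrete
# have a linear attenuation coefficient of roughly 0.15 1/cm.
d0 = 50.0   # unshielded dose rate, mrem/h
mu = 0.15   # 1/cm
for x in (10, 20, 30):
    print(f"{x} cm of concrete: {dose_behind_shield(d0, mu, x):.2f} mrem/h")
```

Comparing such hand estimates (and full shielding-code predictions) against measured dose rates is exactly the validation exercise the paper describes.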

  6. A test harness for accelerating physics parameterization advancements into operations

    NASA Astrophysics Data System (ADS)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in the parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, the availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity, from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a

  7. Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana

    2013-03-01

    The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entrains the challenge of creating a link between two or more representations of the same trace. In order to be forensically sound, the two security aspects of integrity and authenticity especially need to be maintained at all times. Adherence to authenticity by technical means proves to be a particular challenge at the boundary between a physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data, for integration into the conventional documentation of the collection of items of evidence (the bagging-and-tagging process). Using the QR-code as an exemplary implementation of a bar code, together with a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. [1] We use the example of digital dactyloscopy as a forensic discipline where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for verification of the physical trace, extending the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator addresses the readability and the verification of its contents. We can read the bar code despite its limited size of 42 x 42 mm and the rather large amount of embedded data using various devices. Furthermore, the QR-code's error-correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows malicious manipulations of the embedded data to be detected.
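
The integrity/authenticity link can be sketched with a keyed hash over the trace metadata. The paper appends full digital signatures; the HMAC, key, and field names below are simplifying assumptions for a dependency-free example of what the QR payload would encode:

```python
import hashlib
import hmac
import json

SECRET = b"lab-evidence-key"   # hypothetical key held by the lab

def make_qr_payload(meta: dict) -> str:
    """Serialise trace metadata plus an authentication tag; the result
    is the string a QR code on the evidence bag would encode."""
    body = json.dumps(meta, sort_keys=True)
    tag = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "|" + tag

def verify_qr_payload(payload: str) -> bool:
    """Recompute the tag and compare in constant time."""
    body, tag = payload.rsplit("|", 1)
    expect = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expect)

payload = make_qr_payload({"trace_id": "T-042", "scene": "S-7",
                           "acquired": "2013-03-01T10:15"})
print(verify_qr_payload(payload))                            # True
print(verify_qr_payload(payload.replace("T-042", "T-043")))  # False: tampered
```

Any edit to the embedded metadata invalidates the tag, which is the property that lets a smartphone verifier detect manipulation of the physical-to-digital link.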

  8. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing three simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics (MHD)) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport-code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work; for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will greatly advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal

  9. Physical Scaffolding Accelerates the Evolution of Robot Behavior.

    PubMed

    Buckingham, David; Bongard, Josh

    2017-01-01

    In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: A fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation while anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.

  10. Dissemination and support of ARGUS for accelerator applications. Technical progress report, April 24, 1991--January 20, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. The primary mission of this project is to develop the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  11. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian state funding agency FAPERJ [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and bringing the students close to modern laboratory techniques.

  12. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and is therefore different from recent methods that improve the optimisation itself. Our warm-start strategy uses carefully designed hand-crafted filters (SCIRD-TS) modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.

  13. Acceleration of neutrons in a scheme of a tautochronous mathematical pendulum (physical principles)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivlin, Lev A

    We consider the physical principles of neutron acceleration through a multiple synchronous interaction with a gradient rf magnetic field in a scheme of a tautochronous mathematical pendulum. (laser applications and other aspects of quantum electronics)

  14. The physics of sub-critical lattices in accelerator driven hybrid systems: The MUSE experiments in the MASURCA facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauvin, J. P.; Lebrat, J. F.; Soule, R.

    Since 1991, the CEA has studied the physics of hybrid systems involving a sub-critical reactor coupled with an accelerator. These studies have provided information on the potential of hybrid systems to transmute actinides and long-lived fission products. The potential of such a system remains to be proven, specifically in terms of the physical understanding and modelling of the different phenomena involved, as well as experimental validation of the coupled sub-critical core/accelerator system. This validation must be achieved through mock-up studies of sub-critical cores coupled to a source of external neutrons. The MUSE-4 mock-up experiment is planned at the MASURCA facility and will use an accelerator coupled to a tritium target. The step from the neutron generator used in the past to this accelerator will increase knowledge of hybrid-system physics and decrease the experimental biases and measurement uncertainties.

  15. Using the FLUKA Monte Carlo Code to Simulate the Interactions of Ionizing Radiation with Matter to Assist and Aid Our Understanding of Ground Based Accelerator Testing, Space Hardware Design, and Secondary Space Radiation Environments

    NASA Technical Reports Server (NTRS)

    Reddell, Brandon

    2015-01-01

    Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.

  16. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsouleas, Thomas; Decyk, Viktor

    Berkeley National Lab later implemented a similar basic quasistatic scheme including pipelining in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys, 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June, 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July, 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June, 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August, 2008, p. 340 [AIP Conf. Proceedings, vol. 
1086, Melville, NY].

  17. Conceptual designs of two petawatt-class pulsed-power accelerators for high-energy-density-physics experiments

    NASA Astrophysics Data System (ADS)

    Stygar, W. A.; Awe, T. J.; Bailey, J. E.; Bennett, N. L.; Breden, E. W.; Campbell, E. M.; Clark, R. E.; Cooper, R. A.; Cuneo, M. E.; Ennis, J. B.; Fehl, D. L.; Genoni, T. C.; Gomez, M. R.; Greiser, G. W.; Gruner, F. R.; Herrmann, M. C.; Hutsel, B. T.; Jennings, C. A.; Jobe, D. O.; Jones, B. M.; Jones, M. C.; Jones, P. A.; Knapp, P. F.; Lash, J. S.; LeChien, K. R.; Leckbee, J. J.; Leeper, R. J.; Lewis, S. A.; Long, F. W.; Lucero, D. J.; Madrid, E. A.; Martin, M. R.; Matzen, M. K.; Mazarakis, M. G.; McBride, R. D.; McKee, G. R.; Miller, C. L.; Moore, J. K.; Mostrom, C. B.; Mulville, T. D.; Peterson, K. J.; Porter, J. L.; Reisman, D. B.; Rochau, G. A.; Rochau, G. E.; Rose, D. V.; Rovang, D. C.; Savage, M. E.; Sceiford, M. E.; Schmit, P. F.; Schneider, R. F.; Schwarz, J.; Sefkow, A. B.; Sinars, D. B.; Slutz, S. A.; Spielman, R. B.; Stoltzfus, B. S.; Thoma, C.; Vesey, R. A.; Wakeland, P. E.; Welch, D. R.; Wisher, M. L.; Woodworth, J. R.

    2015-11-01

    We have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator's water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator's physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD) simulations
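The brick described above, a single switch with two capacitors in series, follows the usual series-capacitance rule. As a back-of-the-envelope sketch (the capacitance, charge voltage, and brick count below are hypothetical placeholders, not values from the paper), the stored energy can be estimated as:

```python
def brick_energy(c1, c2, v, n_bricks):
    """Total energy stored in n_bricks LTD bricks, each being two capacitors
    in series charged to voltage v. All inputs here are illustrative."""
    c_series = c1 * c2 / (c1 + c2)            # series capacitance of one brick
    return n_bricks * 0.5 * c_series * v**2   # E = 1/2 C V^2 per brick, in joules

# Hypothetical example: two 80 nF capacitors charged to 100 kV
energy_j = brick_energy(80e-9, 80e-9, 100e3, 1)
```

Summing such a per-brick energy over every brick in the 90 modules is what the abstract's 48 MJ total for Z 300 refers to; the actual brick parameters are given in the cited design papers, not here.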

  18. Introductory Physics Experiments Using the Wiimote

    NASA Astrophysics Data System (ADS)

    Somers, William; Rooney, Frank; Ochoa, Romulo

    2009-03-01

    The Wii, a video game console, is a very popular device with millions of units sold worldwide over the past two years. Although computationally it is not a powerful machine, to a physics educator its most important components can be its controllers. The Wiimote (or remote) controller contains three accelerometers, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open source code, any PC with Bluetooth capability can detect the information sent out by the Wiimote. We have designed several experiments for introductory physics courses that make use of the accelerometers and Bluetooth connectivity. We have adapted the Wiimote to measure the variable acceleration in simple harmonic motion, the centripetal and tangential accelerations in circular motion, and the accelerations generated when students lift weights. We present the results of our experiments and compare them with those obtained when using motion and/or force sensors.
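For the circular-motion experiment mentioned above, the accelerometer readings to compare against follow from a_c = ω²r (centripetal) and a_t = αr (tangential). The sketch below computes both for illustrative values; it is not tied to the authors' apparatus.

```python
import math

def circular_accels(radius_m, omega_rad_s, alpha_rad_s2):
    """Expected accelerations for a sensor at radius r on a rotating platform.
    Returns (centripetal, tangential, magnitude), all in m/s^2."""
    a_centripetal = omega_rad_s**2 * radius_m   # points toward the rotation axis
    a_tangential = alpha_rad_s2 * radius_m      # along the direction of motion
    a_total = math.hypot(a_centripetal, a_tangential)
    return a_centripetal, a_tangential, a_total

# Example: sensor 0.5 m from the axis, spinning at 2 rad/s at constant rate
ac, at, a = circular_accels(0.5, 2.0, 0.0)
```

A student can compare these predictions directly with the accelerometer traces streamed over Bluetooth, which is the kind of comparison the abstract describes against motion/force sensors.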

  19. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Safety color code for marking physical hazards. 29 CFR 1910.144, Labor Regulations Relating to Labor (Continued), OCCUPATIONAL SAFETY AND HEALTH. Excerpt: "... the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii) ..."

  20. ASP2012: Fundamental Physics and Accelerator Sciences in Africa

    NASA Astrophysics Data System (ADS)

    Darve, Christine

    2012-02-01

    Much remains to be done to improve education and scientific research in Africa. Supported by the international scientific community, our initiative has been to contribute to fostering science in sub-Saharan Africa by establishing a biennial school on fundamental subatomic physics and its applications. The school is based on a close interplay between theoretical, experimental, and applied physics. The lectures are addressed to students and young researchers with at least four years of university education. The aim of the school is to develop the capacity to interpret and capitalize on the results of current and future physics experiments with particle accelerators, thereby spreading education for innovation in related applications and technologies, such as medicine and information science. Following the worldwide success of the first edition, which gathered 65 students for three weeks in Stellenbosch (South Africa) in August 2010, the second edition will be hosted in Ghana from July 15 to August 4, 2012. The school is a non-profit organization, which provides partial or full financial support to 50 of the selected students, with priority given to sub-Saharan African students.

  1. Physics of the saturation of particle acceleration in relativistic magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Kagan, Daniel; Nakar, Ehud; Piran, Tsvi

    2018-05-01

    We investigate the saturation of particle acceleration in relativistic reconnection using two-dimensional particle-in-cell simulations at various magnetizations σ. We find that the particle energy spectrum produced in reconnection quickly saturates as a hard power law that cuts off at γ ≈ 4σ, confirming previous work. Using particle tracing, we find that particle acceleration by the reconnection electric field in X-points determines the shape of the particle energy spectrum. By analysing the current sheet structure, we show that the physical cause of saturation is the spontaneous formation of secondary magnetic islands that can disrupt particle acceleration. By comparing the size of acceleration regions to the typical distance between disruptive islands, we show that the maximum Lorentz factor produced in reconnection is γ ≈ 5σ, which is very close to what we find in our particle energy spectra. We also show that the dynamic range in Lorentz factor of the power-law spectrum in reconnection is ≤40. The hardness of the power law combined with its narrow dynamic range implies that relativistic reconnection is capable of producing the hard narrow-band flares observed in the Crab nebula but has difficulty producing the softer broad-band prompt gamma-ray burst emission.

  2. Study of no-man's land physics in the total-f gyrokinetic code XGC1

    NASA Astrophysics Data System (ADS)

    Ku, Seung Hoe; Chang, C. S.; Lang, J.

    2014-10-01

    While the "transport shortfall" in the "no-man's land" has often been observed in delta-f codes, it has not yet been observed in the global total-f gyrokinetic particle code XGC1. Since understanding the interaction between edge and core transport appears to be a critical element in predicting ITER performance, understanding the no-man's land issue is an important physics research topic. Simulation results using the Holland case will be presented and the physics causing the shortfall phenomenon will be discussed. Nonlinear, nonlocal interaction of turbulence, secondary flows, and transport appears to be the key.

  3. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous-energy and multi-group nuclear data are verified by locally developed checking codes which apply basic physics knowledge and common-sense rules. A list of nuclear data problems that have been identified with the help of these checking codes is also given.

  4. Physics behind the mechanical nucleosome positioning code

    NASA Astrophysics Data System (ADS)

    Zuiddam, Martijn; Everaers, Ralf; Schiessel, Helmut

    2017-11-01

    The positions along DNA molecules of nucleosomes, the most abundant DNA-protein complexes in cells, are influenced by the sequence-dependent DNA mechanics and geometry. This leads to the "nucleosome positioning code", a preference of nucleosomes for certain sequence motives. Here we introduce a simplified model of the nucleosome where a coarse-grained DNA molecule is frozen into an idealized superhelical shape. We calculate the exact sequence preferences of our nucleosome model and find it to reproduce qualitatively all the main features known to influence nucleosome positions. Moreover, using well-controlled approximations to this model allows us to come to a detailed understanding of the physics behind the sequence preferences of nucleosomes.

  5. A Model of RHIC Using the Unified Accelerator Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilat, F.; Tepikian, S.; Trahern, C. G.

    1998-01-01

    The Unified Accelerator Library (UAL) is an object-oriented and modular software environment for accelerator physics which comprises an accelerator object model for the description of the machine (SMF, for Standard Machine Format), a collection of physics libraries, and a Perl interface that provides a homogeneous shell for integrating and managing these components. Currently available physics libraries include TEAPOT++, a collection of C++ physics modules conceptually derived from TEAPOT, and DNZLIB, a differential algebra package for map generation. This software environment has been used to build a flat model of RHIC which retains the hierarchical lattice description while assigning specific characteristics to individual elements, such as measured field harmonics. A first application of the model and of the simulation capabilities of UAL has been the study of RHIC stability in the presence of Siberian snakes and spin rotators. The building blocks of RHIC snakes and rotators are helical dipoles, unconventional devices that cannot be modeled by traditional accelerator physics codes and have been implemented in UAL as Taylor maps. Section 2 describes the RHIC data stores, Section 3 the RHIC SMF format, and Section 4 the RHIC-specific Perl interface (RHIC Shell). Section 5 explains how the RHIC SMF and UAL have been used to study the RHIC dynamic behavior and presents detuning and dynamic aperture results. If the reader is not familiar with the motivation and characteristics of UAL, a useful overview paper is included in the Appendix. An example of a complete set of Perl scripts for RHIC simulation can also be found in the Appendix.

  6. Proton and Ion Acceleration using Multi-kJ Lasers

    NASA Astrophysics Data System (ADS)

    Wilks, S. C.; Ma, T.; Kemp, A. J.; Tabak, M.; Link, A. J.; Haefner, C.; Hermann, M. R.; Mariscal, D. A.; Rubenchik, S.; Sterne, P.; Kim, J.; McGuffey, C.; Bhutwala, K.; Beg, F.; Wei, M.; Kerr, S. M.; Sentoku, Y.; Iwata, N.; Norreys, P.; Sevin, A.

    2017-10-01

    Short (<50 ps) laser pulses are capable of accelerating protons and ions from solid (or dense gas jet) targets, as demonstrated over the past 20 years by a number of laser facilities around the world, with protons accelerated to between 1 and 100 MeV depending on the specific laser parameters. Over this time, a distinct scaling with energy has emerged that shows a trend towards increasing maximum accelerated proton (ion) energy with increasing laser energy. We consider the physical basis underlying this scaling, and use it to estimate future results when multi-kJ laser systems begin operating in this new high-energy regime. In particular, we consider the effects of laser prepulse, intensity, energy, and pulse length on the number and energy of the ions, as well as target size and composition. We also discuss potential uses of these ion beams in high-energy-density physics experiments. This work was performed under the auspices of the U.S. Department of Energy (DOE) by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.

  7. Physical condition for the slowing down of cosmic acceleration

    NASA Astrophysics Data System (ADS)

    Zhang, Ming-Jian; Xia, Jun-Qing

    2018-04-01

    The possible slowing down of cosmic acceleration has been widely studied. However, judgments on this effect in different dark energy parameterizations have been ambiguous, and the origin of these uncertainties remained unknown. In the present paper, we analyze the derivative of the deceleration parameter, q′(z), using Gaussian processes. This model-independent reconstruction suggests that no slowing down of acceleration is present within the 95% C.L. from the Union2.1 and JLA supernova data. However, q′(z) from the observational H(z) data is slightly smaller than zero at the 95% C.L., which indicates that future H(z) data may have the potential to test this effect. From the evolution of q′(z), we present an interesting constraint on dark energy and the observational data. The physical constraint clearly explains why some dark energy models could not produce this effect in previous work. Comparison between the constraint and observational data also shows that most current data are not in the allowed regions. This suggests why current data cannot convincingly measure this effect.

  8. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  9. New methods in WARP, a particle-in-cell code for space-charge dominated beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D., LLNL

    1998-01-12

    The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space-charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being "space-charge dominated") and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments are discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining essential physics. An interface using Python as the interpreter layer instead of Basis has been developed. A parallel version of WARP has been developed using Python.
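The electrostatic PIC cycle that codes like WARP build on can be sketched in a few lines: deposit charge onto a grid, solve Poisson's equation, gather the field back to the particles, and push them. The minimal 1D periodic sketch below illustrates only the generic technique, not WARP's implementation; normalized units (eps0 = 1) and a uniform neutralizing background are assumptions.

```python
import numpy as np

def pic_step(x, v, q_over_m, L, ng, dt):
    """One step of a generic 1D electrostatic PIC cycle on a periodic domain
    of length L with ng grid cells. Illustrative sketch only."""
    dx = L / ng
    # Cloud-in-cell (linear) charge deposition onto the grid
    xg = x / dx
    i = np.floor(xg).astype(int) % ng
    w = xg - np.floor(xg)
    rho = np.zeros(ng)
    np.add.at(rho, i, 1.0 - w)
    np.add.at(rho, (i + 1) % ng, w)
    rho = rho / dx - len(x) / L          # subtract uniform neutralizing background
    # Solve Poisson's equation d2(phi)/dx2 = -rho in Fourier space (eps0 = 1)
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d(phi)/dx
    # Gather the field at particle positions and leapfrog push
    Ep = (1.0 - w) * E[i] + w * E[(i + 1) % ng]
    v = v + q_over_m * Ep * dt
    x = (x + v * dt) % L
    return x, v
```

A production code adds the accelerator-specific layers the abstract mentions, multi-dimensional geometry, the external-field lattice, and parallel decomposition, around exactly this inner loop.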

  10. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system

  11. Physical-Layer Network Coding for VPN in TDM-PON

    NASA Astrophysics Data System (ADS)

    Wang, Qike; Tse, Kam-Hon; Chen, Lian-Kuan; Liew, Soung-Chang

    2012-12-01

    We experimentally demonstrate a novel optical physical-layer network coding (PNC) scheme over a time-division multiplexing (TDM) passive optical network (PON). Full-duplex error-free communication between optical network units (ONUs) at 2.5 Gb/s is shown for all-optical virtual private network (VPN) applications. Compared to the conventional half-duplex communication set-up, our scheme can increase the capacity by 100% with a power penalty smaller than 3 dB. Synchronization of the two ONUs is not required for the proposed VPN scheme.
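At the logical level, physical-layer network coding lets a relay broadcast the XOR of two simultaneously received streams, and each endpoint recovers the other's data by cancelling its own bits. The bit-level sketch below illustrates only this general PNC principle, not the optical TDM-PON implementation of the paper.

```python
def pnc_relay(bits_a, bits_b):
    """Relay step: map the superposition of two upstream streams to their XOR
    and broadcast it. Bit-level abstraction of the PNC principle."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

def pnc_decode(own_bits, relayed):
    """Endpoint step: XOR out one's own transmitted bits to recover the peer's."""
    return [r ^ o for r, o in zip(relayed, own_bits)]

# Two ONUs exchange data through one broadcast instead of two relayed unicasts,
# which is the source of the 100% capacity gain cited in the abstract.
a = [1, 0, 1, 1, 0]   # ONU1 upstream bits (illustrative)
b = [0, 0, 1, 0, 1]   # ONU2 upstream bits (illustrative)
c = pnc_relay(a, b)   # single broadcast from the relay
```

In the optical scheme, the XOR mapping is performed in the physical layer rather than in software, but the information flow is the same.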

  12. Pyroelectric Crystal Accelerator In The Department Of Physics And Nuclear Engineering At West Point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillich, Don; Kovanen, Andrew; Anderson, Tom

    The Nuclear Science and Engineering Research Center (NSERC), a Defense Threat Reduction Agency (DTRA) office located at the United States Military Academy (USMA), sponsors and manages cadet and faculty research in support of DTRA objectives. The NSERC has created an experimental pyroelectric crystal accelerator program to enhance undergraduate education at USMA in the Department of Physics and Nuclear Engineering. This program provides cadets with hands-on experience in designing their own experiments using an inexpensive tabletop accelerator. This device uses pyroelectric crystals to ionize and accelerate gas ions to energies of approximately 100 keV. Within the next year, cadets and faculty at USMA will use this device to create neutrons through the deuterium-deuterium (D-D) fusion process, effectively creating a compact, portable neutron generator. The double-crystal pyroelectric accelerator will also be used by students to investigate neutron, x-ray, and ion spectroscopy.

  13. Acceleration methods for multi-physics compressible flow

    NASA Astrophysics Data System (ADS)

    Peles, Oren; Turkel, Eli

    2018-04-01

    In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems, including turbulent, reactive, and two-phase flows. The flows considered are subsonic, transonic, and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother, which enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/Implicit smoother requires an approximation of the source-term Jacobian. The properties of the Jacobian are very important for the stability of the method. We discuss what the theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix. We focus on the implication of Le Chatelier's principle for the sign of the diagonal entries of the Jacobian. We present the implementation of the method for turbulent flow. We use two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is to two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation

  14. The r-Java 2.0 code: nuclear physics

    NASA Astrophysics Data System (ADS)

    Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.

    2014-08-01

    Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0 that includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission), along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield is studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise the users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.
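The full reaction network mentioned above is a stiff system of coupled abundance ODEs, which is why such codes use implicit integration. As a minimal sketch of that numerical pattern (not r-Java's actual network or rates), the example below integrates a toy two-step decay chain A → B → C with backward Euler; the rates and species are hypothetical.

```python
import numpy as np

def decay_chain(n0, lam1, lam2, dt, steps):
    """Backward-Euler integration of the linear network dN/dt = M N for a toy
    chain A -> B -> C. Illustrative of stiff-network integration only."""
    M = np.array([[-lam1,  0.0,  0.0],
                  [ lam1, -lam2, 0.0],
                  [ 0.0,   lam2, 0.0]])
    A = np.eye(3) - dt * M          # implicit Euler: (I - dt*M) N_{k+1} = N_k
    n = np.array(n0, dtype=float)
    for _ in range(steps):
        n = np.linalg.solve(A, n)
    return n

# Hypothetical rates: fast first decay, slower second decay
final = decay_chain([1.0, 0.0, 0.0], 1.0, 0.5, 0.01, 1000)
```

Because each column of M sums to zero, the scheme conserves the total abundance exactly (to solver precision), a sanity check real network solvers also enforce via mass conservation.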

  15. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.

  16. First muon acceleration using a radio-frequency accelerator

    NASA Astrophysics Data System (ADS)

    Bae, S.; Choi, H.; Choi, S.; Fukao, Y.; Futatsukawa, K.; Hasegawa, K.; Iijima, T.; Iinuma, H.; Ishida, K.; Kawamura, N.; Kim, B.; Kitamura, R.; Ko, H. S.; Kondo, Y.; Li, S.; Mibe, T.; Miyake, Y.; Morishita, T.; Nakazawa, Y.; Otani, M.; Razuvaev, G. P.; Saito, N.; Shimomura, K.; Sue, Y.; Won, E.; Yamazaki, T.

    2018-05-01

    Muons have been accelerated by using a radio-frequency accelerator for the first time. Negative muonium atoms (Mu-), which are bound states of positive muons (μ+) and two electrons, are generated from μ+'s through the electron capture process in an aluminum degrader. The generated Mu-'s are initially electrostatically accelerated and injected into a radio-frequency quadrupole linac (RFQ). In the RFQ, the Mu-'s are accelerated to 89 keV. The accelerated Mu-'s are identified by momentum measurement and time of flight. This compact muon linac opens the door to various muon accelerator applications including particle physics measurements and the construction of a transmission muon microscope.

  17. IOTA (Integrable Optics Test Accelerator): Facility and experimental beam physics program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antipov, Sergei; Broemmelsiek, Daniel; Bruhwiler, David

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. Finally, the physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  18. IOTA (Integrable Optics Test Accelerator): Facility and experimental beam physics program

    DOE PAGES

    Antipov, Sergei; Broemmelsiek, Daniel; Bruhwiler, David; ...

    2017-03-06

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. Finally, the physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  19. IOTA (Integrable Optics Test Accelerator): facility and experimental beam physics program

    NASA Astrophysics Data System (ADS)

    Antipov, S.; Broemmelsiek, D.; Bruhwiler, D.; Edstrom, D.; Harms, E.; Lebedev, V.; Leibfritz, J.; Nagaitsev, S.; Park, C. S.; Piekarz, H.; Piot, P.; Prebys, E.; Romanov, A.; Ruan, J.; Sen, T.; Stancari, G.; Thangaraj, C.; Thurman-Keup, R.; Valishev, A.; Shiltsev, V.

    2017-03-01

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. The physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  20. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the TriComp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to the target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.
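The envelope codes named above integrate an rms beam-envelope equation along the machine. A minimal smooth-focusing sketch of that class of calculation is below; it is an illustration of the envelope-equation idea, not the actual XTR or LAMDA implementation, and all parameter values are invented.

```python
def envelope_step(r, rp, dz, k0, K, eps):
    """One leapfrog step of the smooth-focusing beam envelope equation
    r'' = -k0*r + K/r + eps**2/r**3
    (external focusing, space-charge perveance K, emittance eps)."""
    def rpp(radius):
        return -k0 * radius + K / radius + eps ** 2 / radius ** 3
    r_half = r + 0.5 * dz * rp
    rp_new = rp + dz * rpp(r_half)
    return r_half + 0.5 * dz * rp_new, rp_new

# Matched radius for K = 0: k0*r = eps^2/r^3  ->  r = (eps^2/k0)**0.25
k0, K, eps = 4.0, 0.0, 1.0e-2          # invented illustrative values
r_match = (eps ** 2 / k0) ** 0.25
r, rp = 1.05 * r_match, 0.0            # 5% mismatch -> bounded oscillation
peak = 0.0
for _ in range(20_000):                # integrate 20 m in 1 mm steps
    r, rp = envelope_step(r, rp, 1.0e-3, k0, K, eps)
    peak = max(peak, abs(r - r_match) / r_match)
```

A matched beam stays at `r_match`; a small mismatch produces the bounded envelope oscillation that such codes are used to diagnose and tune out.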

  1. Conceptual design of a pulsed-power accelerator optimized for megajoule-class 1-TPa dynamic-material-physics experiments

    DOE PAGES

    Stygar, William A.; Reisman, David B.; Stoltzfus, Brian S.; ...

    2016-07-07

    In this study, we have developed a conceptual design of a next-generation pulsed-power accelerator that is optimized for driving megajoule-class dynamic-material-physics experiments at pressures as high as 1 TPa. The design is based on an accelerator architecture that is founded on three concepts: single-stage electrical-pulse compression, impedance matching, and transit-time-isolated drive circuits. Since much of the accelerator is water insulated, we refer to this machine as Neptune. The prime power source of Neptune consists of 600 independent impedance-matched Marx generators. As much as 0.8 MJ and 20 MA can be delivered in a 300-ns pulse to a 16-mΩ physics load; hence Neptune is a megajoule-class 20-MA arbitrary waveform generator. Neptune will allow the international scientific community to conduct dynamic equation-of-state, phase-transition, mechanical-property, and other material-physics experiments with a wide variety of well-defined drive-pressure time histories. Because Neptune can deliver on the order of a megajoule to a load, such experiments can be conducted on centimeter-scale samples at terapascal pressures with time histories as long as 1 μs.
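As a back-of-envelope consistency check on the quoted figures (20 MA into a 16 mΩ load over a 300 ns pulse), the peak load power and an idealized square-pulse energy bound can be computed directly. The real waveform is of course not square, which is why the delivered energy (0.8 MJ) sits below the bound.

```python
I = 20e6        # peak load current, A
Z = 16e-3       # load impedance, ohm
tau = 300e-9    # pulse duration, s

peak_power = I ** 2 * Z                   # P = I^2 * Z at the load
square_pulse_energy = peak_power * tau    # upper bound for a flat 300 ns pulse
```

The check gives a peak load power of 6.4 TW and a square-pulse bound of about 1.9 MJ, consistent with the 0.8 MJ actually delivered by a shaped waveform.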

  2. Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs

    NASA Astrophysics Data System (ADS)

    Chhotray, Atul; Lazzati, Davide

    2018-05-01

    We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space, in particular during the Rph < Rsat regime. Our results reveal new phases of fireball evolution: a transition phase with a radial extent of several orders of magnitude, over which the fireball transitions from Γ ∝ R to Γ ∝ R^0; a post-photospheric acceleration phase, in which fireballs accelerate beyond the photosphere; and a Thomson-dominated acceleration phase, characterized by slow acceleration of optically thick, matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions for the Lorentz factor evolution, which will be useful for deriving jet parameters.
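The baseline that these new phases modify is the classical fireball scaling: the Lorentz factor grows roughly linearly with radius until the saturation radius, then coasts. A sketch of that baseline (not the DynaMo-specific transition or post-photospheric phases) with illustrative parameters:

```python
def lorentz_factor(R, R0, eta):
    """Classical fireball scaling: Gamma ~ R/R0 in the acceleration
    phase, saturating at Gamma ~ eta (the dimensionless entropy) once
    R exceeds R_sat = eta * R0, i.e. Gamma ~ R^0 thereafter."""
    if R <= R0:
        return 1.0
    return min(R / R0, float(eta))

g_accel = lorentz_factor(10.0, 1.0, 100.0)   # acceleration phase
g_coast = lorentz_factor(1.0e4, 1.0, 100.0)  # coasting phase
```

The abstract's result is that real fireballs depart from this sharp two-regime picture, with an extended transition between the two slopes.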

  3. Conceptual designs of two petawatt-class pulsed-power accelerators for high-energy-density-physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stygar, W. A.; Awe, T. J.; Bennett, N L

    Here, we have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator’s water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator’s physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD)

  4. Conceptual designs of two petawatt-class pulsed-power accelerators for high-energy-density-physics experiments

    DOE PAGES

    Stygar, W. A.; Awe, T. J.; Bennett, N L; ...

    2015-11-30

    Here, we have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator’s water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator’s physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD)

  5. Physics and Novel Schemes of Laser Radiation Pressure Acceleration for Quasi-monoenergetic Proton Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chuan S.; Shao, Xi

    2016-06-14

    The main objective of our work is to provide a theoretical basis and modeling support for the design and experimental setup of a compact laser proton accelerator to produce high-quality proton beams tunable in energy from 50 to 250 MeV using short-pulse sub-petawatt lasers. We performed theoretical and computational studies of energy scaling and Rayleigh-Taylor instability development in laser radiation pressure acceleration (RPA) and developed novel RPA-based schemes to remedy/suppress instabilities for high-quality quasi-monoenergetic proton beam generation, as we proposed. During the project period, we published nine peer-reviewed journal papers and made twenty conference presentations, including six invited talks, on our work. The project supported one graduate student, who received his PhD degree in physics in 2013, and two post-doctoral associates. We also mentored three high school students and one undergraduate physics major by inspiring their interest and involving them in the project.

  6. Convergence Acceleration and Documentation of CFD Codes for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Marquart, Jed E.

    2005-01-01

    The development and analysis of turbomachinery components for industrial and aerospace applications has been greatly enhanced in recent years through the advent of computational fluid dynamics (CFD) codes and techniques. Although the use of this technology has greatly reduced the time required to perform analysis and design, there still remains much room for improvement in the process. In particular, there is a steep learning curve associated with most turbomachinery CFD codes, and the computation times need to be reduced in order to facilitate their integration into standard work processes. Two turbomachinery codes have recently been developed by Dr. Daniel Dorney (MSFC) and Dr. Douglas Sondak (Boston University). These codes are entitled Aardvark (for 2-D and quasi-3-D simulations) and Phantom (for 3-D simulations). The codes utilize the General Equation Set (GES), structured-grid methodology, and overset O- and H-grids. The codes have been used with success by Drs. Dorney and Sondak, as well as others within the turbomachinery community, to analyze engine components and other geometries. One of the primary objectives of this study was to establish a set of parametric input values that will enhance convergence rates for steady-state simulations, as well as reduce the runtime required for unsteady cases. The goal is to reduce the turnaround time for CFD simulations, thus permitting more design parametrics to be run within a given time period. In addition, other code enhancements to reduce runtimes were investigated and implemented. The other primary goal of the study was to develop enhanced users' manuals for Aardvark and Phantom. These manuals are intended to answer most questions for new users, as well as provide valuable detailed information for the experienced user. The existence of detailed users' manuals will enable new users to become proficient with the codes, as well as reducing the dependency of new users on the code authors. In order to achieve the

  7. Modeling laser-driven electron acceleration using WARP with Fourier decomposition

    DOE PAGES

    Lee, P.; Audet, T. L.; Lehe, R.; ...

    2015-12-31

    WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.
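The Fourier-decomposition idea behind this speedup is to represent near-axisymmetric fields on an (r, z) grid with a few azimuthal modes e^(imθ) instead of a full 3-D grid. A toy decomposition of a field sampled on one ring illustrates the mode truncation; this is not WARP's implementation, just the underlying transform.

```python
import numpy as np

ntheta, nmodes = 64, 3
theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)

# A field on a ring: an m=0 offset plus a small m=1 asymmetry,
# typical of a slightly off-axis laser-plasma field
field = 1.5 + 0.4 * np.cos(theta)

# Decompose into azimuthal modes and keep only the first few
modes = np.fft.rfft(field) / ntheta
modes[nmodes:] = 0.0
reconstructed = np.fft.irfft(modes * ntheta, n=ntheta)
```

Because laser-wakefield fields are dominated by the lowest azimuthal modes, truncating to two or three modes reproduces the field while reducing the 3-D problem to a handful of 2-D ones.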

  8. Modeling laser-driven electron acceleration using WARP with Fourier decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, P.; Audet, T. L.; Lehe, R.

    WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.

  9. The EGS4 Code System: Solution of Gamma-ray and Electron Transport Problems

    DOE R&D Accomplishments Database

    Nelson, W. R.; Namito, Yoshihito

    1990-03-01

    In this paper we present an overview of the EGS4 Code System -- a general purpose package for the Monte Carlo simulation of the transport of electrons and photons. During the last 10-15 years EGS has been widely used to design accelerators and detectors for high-energy physics. More recently the code has been found to be of tremendous use in medical radiation physics and dosimetry. The problem-solving capabilities of EGS4 will be demonstrated by means of a variety of practical examples. To facilitate this review, we will take advantage of a new add-on package, called SHOWGRAF, to display particle trajectories in complicated geometries. These are shown as 2-D laser pictures in the written paper and as photographic slides of a 3-D high-resolution color monitor during the oral presentation. 11 refs., 15 figs.
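The core Monte Carlo idea behind transport codes of the EGS4 family, sampling exponentially distributed free paths between interactions, fits in a few lines. The toy below counts only uncollided photons through a uniform slab and ignores all the scattering and shower physics EGS4 actually models; the attenuation coefficient and thickness are invented illustrative values.

```python
import math
import random

def uncollided_fraction(mu, thickness, n=200_000, seed=42):
    """Sample exponential free paths s = -ln(xi)/mu and count photons
    whose first interaction lies beyond the slab; the fraction should
    approach the analytic result exp(-mu * thickness)."""
    rng = random.Random(seed)
    passed = sum(
        1 for _ in range(n) if -math.log(rng.random()) / mu > thickness
    )
    return passed / n

# mu * thickness = 1 mean free path -> analytic answer e^-1 ~ 0.368
frac = uncollided_fraction(mu=0.2, thickness=5.0)
```

The statistical error scales as 1/sqrt(n), which is why production codes like EGS4 invest heavily in variance reduction on top of this basic sampling loop.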

  10. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least faster than the sequential implementation and faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in
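The kernel that such parallelizations target is a nearest-neighbor stencil update of the 2-D wave equation. A minimal serial NumPy reference version (not the cardiac action potential model itself, and without the OpenACC pragmas, which annotate the equivalent C loops) looks like this:

```python
import numpy as np

def step(u, u_prev, c2):
    """One leapfrog time step of the 2-D wave equation on a periodic
    grid; c2 = (c*dt/dx)**2 must satisfy c2 <= 0.5 for stability."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return 2.0 * u - u_prev + c2 * lap, u

n = 64
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0          # point excitation at the grid center
u_prev = u.copy()
for _ in range(200):
    u, u_prev = step(u, u_prev, 0.25)
```

Every grid point updates independently from its neighbors' previous values, which is exactly the data parallelism that OpenACC, OpenCL, and OpenMP exploit on GPUs and MIC coprocessors.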

  11. Acceleration training for improving physical fitness and weight loss in obese women.

    PubMed

    So, Rina; Eto, Miki; Tsujimoto, Takehiko; Tanaka, Kiyoji

    2014-01-01

    Reducing body weight and visceral adipose tissue (VAT) are the primary goals for maintaining health in obese individuals as compared to those of normal weight, but it is also important to maintain physical fitness for a healthy life after weight loss. Acceleration training (AT) has recently been indicated as an alternative to resistance training for elite athletes and also as a component of preventive medicine. However, it is unclear whether combining AT with a weight-loss diet will improve physical fitness in obese individuals. The present study aimed to determine the synergistic effects of AT on body composition and physical fitness combined with a weight-loss program in overweight and obese women. Twenty-eight obese, middle-aged women were divided into two groups as follows: a diet and aerobic exercise group (DA; BMI: 29.3 ± 3.0 kg/m2); and a diet, aerobic exercise and acceleration training group (DAA; BMI: 31.2 ± 4.0 kg/m2). Both groups completed a 12-week weight-loss program. Body composition, visceral adipose tissue (VAT) area and physical fitness (hand grip, side-to-side steps, single-leg balance with eyes closed, sit-and-reach and maximal oxygen uptake) were measured before and after the program. Body weight, BMI, waist circumference and VAT area decreased significantly in both groups. Hand grip (2.1 ± 3.0 kg), single-leg balance (11.0 ± 15.4 s) and sit-and-reach (6.5 ± 4.8 cm) improved significantly only in the DAA group. Our findings indicate that combining AT with classical lifestyle modifications is effective at reducing VAT, and it may enhance muscle strength and performance in overweight and obese women. © 2014 Asian Oceanian Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  12. The physical and chemical stability of anti-tuberculosis fixed-dose combination products under accelerated climatic conditions.

    PubMed

    Bhutani, H; Mariappan, T T; Singh, S

    2004-09-01

    To determine the physical and chemical stability of anti-tuberculosis fixed-dose combinations (FDC) of rifampicin (RMP), isoniazid (INH), pyrazinamide (PZA) and ethambutol (EMB) sold on the Indian market. The products were stored for 3 months under ICH/WHO accelerated conditions (40 °C/75% RH), with and without the original packaging, in the presence and absence of light. The initial RMP, INH and PZA content was found to be within the range of 90-110% of the label claim. However, the products were found to have some chemical instability even initially; one of the tablets also showed physical instability. Under accelerated conditions, the unpackaged products underwent severe changes, while both physical and chemical changes were also observed in the packaged formulations. The physical changes were stronger under lighted conditions. A significant finding is that PZA, and perhaps EMB, may play a catalytic role in the interaction between INH and RMP. This study suggests that, unless they are packed in barrier packaging, anti-tuberculosis FDC formulations should be considered unstable, and due consideration should be given to their development pharmaceutics, packaging and stability testing.

  13. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    NASA Technical Reports Server (NTRS)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules, the solver and the correlations, are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module and will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  14. Indication, from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, J.D.; Lau, E.L.; Turyshev, S.G.

    Radio metric data from the Pioneer 10/11, Galileo, and Ulysses spacecraft indicate an apparent anomalous, constant acceleration acting on the spacecraft with a magnitude of approximately 8.5 × 10^-8 cm/s^2, directed towards the Sun. Two independent codes and physical strategies have been used to analyze the data. A number of potential causes have been ruled out. We discuss future kinematic tests and possible origins of the signal. © 1998 The American Physical Society
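A quick integration shows why so small an acceleration is detectable in radio metric (Doppler and range) data: it accumulates into measurable velocity and range offsets over a year of tracking. This is back-of-envelope arithmetic on the reported magnitude, not the analysis performed by the two independent codes.

```python
A_P = 8.5e-10            # reported anomalous acceleration, m/s^2 (8.5e-8 cm/s^2)
YEAR = 3.156e7           # one year, s

dv = A_P * YEAR                   # Doppler velocity drift per year, m/s
drift = 0.5 * A_P * YEAR ** 2     # accumulated range drift per year, m
```

The result, a few cm/s of velocity drift and a few hundred kilometers of range drift per year, is far above the precision of deep-space Doppler tracking, which is why the signal stood out despite its tiny magnitude.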

  15. Using a 400 kV Van de Graaff accelerator to teach physics at West Point

    NASA Astrophysics Data System (ADS)

    Marble, D. K.; Bruch, S. E.; Lainis, T.

    1997-02-01

    A small accelerator visitation laboratory is being built at the United States Military Academy using two 400 kV Van de Graaff accelerators. This laboratory will provide quality teaching experiments and increased research opportunities for both faculty and cadets, as well as enhance the department's ability to teach across the curriculum by using nuclear techniques to solve problems in environmental engineering, material science, archaeology, art, etc. This training enhances a student's ability to enter the non-traditional fields that are becoming a large part of the physics job market. Furthermore, a small accelerator visitation laboratory for high school students can stimulate student interest in science and provide an effective means of communicating the scientific method to a general audience. A discussion of the USMA facility, class experiments and student research projects will be presented.

  16. Frontier applications of electrostatic accelerators

    NASA Astrophysics Data System (ADS)

    Liu, Ke-Xin; Wang, Yu-Gang; Fan, Tie-Shuan; Zhang, Guo-Hui; Chen, Jia-Er

    2013-10-01

    The electrostatic accelerator is a powerful tool in many research fields, such as nuclear physics, radiation biology, material science, archaeology and earth sciences. Two electrostatic accelerators, a single-stage Van de Graaff with a terminal voltage of 4.5 MV and an EN tandem with a terminal voltage of 6 MV, were installed in the 1980s and have been in operation since the early 1990s at the Institute of Heavy Ion Physics. Many applications have been carried out since then. These two accelerators are described, and summaries of the most important applications in neutron physics and technology, radiation biology and material science, as well as accelerator mass spectrometry (AMS), are presented.

  17. Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.

    2010-12-01

    The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originated at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
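The diffusive shock acceleration mechanism invoked here has a classic test-particle result: the accelerated spectrum is a power law whose index depends only on the shock compression ratio. A one-line sketch of that standard relation (the PATH code itself solves the full time-dependent transport problem):

```python
def dsa_momentum_index(r):
    """Test-particle diffusive shock acceleration: the downstream
    distribution is f(p) ~ p**(-q) with q = 3r/(r - 1) for a shock
    of compression ratio r."""
    return 3.0 * r / (r - 1.0)

q_strong = dsa_momentum_index(4.0)   # strong shock, r = 4 -> q = 4
q_weak = dsa_momentum_index(3.0)     # weaker shock -> steeper spectrum
```

Because the CME-driven shock's compression ratio and obliquity evolve as it propagates, the effective index changes with time, one reason the abstract stresses shock geometry changes alongside the flare seed population.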

  18. NESSY: NLTE spectral synthesis code for solar and stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.

    2017-07-01

    Context: Physics-based models of solar and stellar magnetically driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, as a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling the entire (UV-visible-IR and radio) spectrum of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum, in good agreement with the available observations.
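The accelerated Λ-iteration scheme mentioned here can be illustrated on a linear toy problem S = (1-ε)ΛS + εB, preconditioning each update with the diagonal (local) part of Λ so that the stiff optically-thick modes converge quickly. The operator below is an invented stand-in with strong local coupling, not NESSY's CMF transfer operator.

```python
import numpy as np

def ali_solve(Lam, eps, B, iters=200):
    """Accelerated Lambda Iteration: divide each residual by the local
    approximate operator 1 - (1-eps)*diag(Lam) instead of iterating
    S <- (1-eps)*Lam@S + eps*B directly (which stalls for small eps)."""
    S = B.copy()
    d = 1.0 - (1.0 - eps) * np.diag(Lam)
    for _ in range(iters):
        residual = (1.0 - eps) * Lam @ S + eps * B - S
        S = S + residual / d
    return S

n, eps = 40, 1.0e-2
# Toy "optically thick" operator: dominant local coupling plus smoothing
K = 0.5 * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
Lam = 0.9 * np.eye(n) + 0.1 * K
B = np.linspace(1.0, 2.0, n)

S_ali = ali_solve(Lam, eps, B)
S_direct = np.linalg.solve(np.eye(n) - (1.0 - eps) * Lam, eps * B)
```

The unaccelerated iteration contracts only by a factor (1-ε) per sweep, so for ε = 0.01 it would need thousands of sweeps; the diagonal preconditioner removes most of that stalling, which is the essence of ALI.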

  19. Shielding calculations for industrial 5/7.5 MeV electron accelerators using the MCNP Monte Carlo Code

    NASA Astrophysics Data System (ADS)

    Peri, Eyal; Orion, Itzhak

    2017-09-01

    High-energy X-rays from accelerators are used to irradiate food ingredients to prevent the growth and development of unwanted biological organisms in food, and thereby extend the shelf life of the products. The X-rays are produced by accelerating electrons to 5 MeV and stopping them in a heavy (high-Z) target. Since 2004, the FDA has approved the use of 7.5 MeV, providing higher production rates at lower treatment costs. In this study we calculated all the essential data needed for a straightforward concrete-shielding design of typical food-accelerator rooms. The following evaluations were done using the MCNP Monte Carlo code system: (1) angular dependence (0-180°) of the photon dose rate for 5 MeV and 7.5 MeV electron beams bombarding iron, aluminum, gold, tantalum, and tungsten targets; (2) angular dependence (0-180°) of the simulated bremsstrahlung spectral distributions for gold, tantalum, and tungsten bombarded by 5 MeV and 7.5 MeV electron beams; (3) concrete attenuation calculations at several photon emission angles for 5 MeV and 7.5 MeV electron beams bombarding a tantalum target. Based on the simulations, we calculated the expected increase in dose rate for facilities intending to increase the energy from 5 MeV to 7.5 MeV, and the concrete width that needs to be added in order to keep the existing dose rate unchanged.
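The concrete attenuation data from such simulations are typically applied through tenth-value layers (TVLs): each TVL of shielding cuts the transmitted dose rate by a factor of ten. A sketch of that sizing arithmetic follows; the dose rates and the ~44 cm TVL are assumed illustrative numbers, not values from this study.

```python
import math

def concrete_thickness_cm(dose_unshielded, dose_limit, tvl_cm):
    """Shield thickness from tenth-value layers: each TVL attenuates
    the dose rate by 10x, so n = log10(D0 / D_limit) layers are
    needed to reach the design limit."""
    n_tvl = math.log10(dose_unshielded / dose_limit)
    return n_tvl * tvl_cm

# Hypothetical: 1e4 uSv/h unshielded, 1 uSv/h design target, and an
# assumed TVL of ~44 cm of ordinary concrete at these photon energies
t = concrete_thickness_cm(1.0e4, 1.0, 44.0)   # 4 TVLs of concrete
```

The same formula answers the paper's practical question in reverse: raising the source term (5 MeV to 7.5 MeV operation) raises D0, and the extra thickness needed is one TVL per factor of ten of added dose rate.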

  20. Comparison of accelerator physics issues for symmetric and asymmetric B-factory rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tigner, M.

    1990-10-10

    A systematic comparison of accelerator physics issues, from the beam-beam interaction to single-particle stability, including ring and IR layout, synchrotron radiation and lost-particle backgrounds, and single- and multi-bunch instabilities, is given. While some practical handicap probably accrues to the asymmetric design because of its extra constraints, the differences between the two approaches tend to be obscured by larger issues, such as how to achieve the enormous increases in luminosity demanded of a B-factory.

  1. Health Physics Code System for Evaluating Accidents Involving Radioactive Materials.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-10-01

    Version 03 The HOTSPOT Health Physics codes were created to provide health physics personnel with a fast, field-portable calculational tool for evaluating accidents involving radioactive materials. HOTSPOT codes provide a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. The developer's website is: http://www.llnl.gov/nhi/hotspot/. Four general programs, PLUME, EXPLOSION, FIRE, and RESUSPENSION, calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Additional programs deal specifically with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. The FIDLER program can calibrate radiation survey instruments for ground survey measurements and initial screening of personnel for possible plutonium uptake in the lung. The HOTSPOT codes are fast, portable, easy to use, and fully documented in electronic help files. HOTSPOT supports color high-resolution monitors and printers for concentration plots and contours. The codes have been extensively used by the DOE community since 1985. Tables and graphical output can be directed to the computer screen, printer, or a disk file. The graphical output consists of dose and ground contamination as a function of plume centerline downwind distance, and radiation dose and ground contamination contours. Users have the option of displaying scenario text on the plots. HOTSPOT 3.0.1 fixes three significant Windows 7 issues: the executable is installed properly under "Program Files/HotSpot 3.0"; the installation package is now smaller, having removed a dependency on older Windows DLL files that previously needed to be installed; and forms now properly scale based on DPI instead of font for users who change their screen resolution to something other than 100%, a more common feature in Windows 7.
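Downwind assessment tools of this kind are built on the standard Gaussian plume dispersion model. The sketch below implements that textbook formula with ground reflection; it illustrates the class of model the PLUME program belongs to, not HOTSPOT's actual code, and all numeric inputs are invented.

```python
import math

def plume_concentration(Q, u, sig_y, sig_z, y, z, H):
    """Ground-reflecting Gaussian plume: concentration (per m^3) at
    crosswind offset y and height z, for source strength Q (per s),
    wind speed u (m/s), release height H (m), and dispersion widths
    sig_y, sig_z (m) evaluated at the downwind distance of interest."""
    norm = Q / (2.0 * math.pi * u * sig_y * sig_z)
    crosswind = math.exp(-y ** 2 / (2.0 * sig_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sig_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sig_z ** 2)))
    return norm * crosswind * vertical

# Ground-level centerline for a ground release: the image-source
# reflection term doubles the unreflected result
c = plume_concentration(Q=1.0, u=5.0, sig_y=10.0, sig_z=5.0,
                        y=0.0, z=0.0, H=0.0)
```

Field tools add atmospheric stability classes, deposition, and dose conversion on top of this kernel, which is what turns a concentration field into the dose and ground-contamination contours described above.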

  2. Separating Movement and Gravity Components in an Acceleration Signal and Implications for the Assessment of Human Daily Physical Activity

    PubMed Central

    van Hees, Vincent T.; Gorzelniak, Lukas; Dean León, Emmanuel Carlos; Eder, Martin; Pias, Marcelo; Taherian, Salman; Ekelund, Ulf; Renström, Frida; Franks, Paul W.; Horsch, Alexander; Brage, Søren

    2013-01-01

    Introduction: Human body acceleration is often used as an indicator of daily physical activity in epidemiological research. Raw acceleration signals contain three basic components: movement, gravity, and noise. Separation of these becomes increasingly difficult during rotational movements. We aimed to evaluate five different methods (metrics) of processing acceleration signals on their ability to remove the gravitational component of acceleration during standardised mechanical movements and the implications for human daily physical activity assessment. Methods: An industrial robot rotated accelerometers in the vertical plane. Radius, frequency, and angular range of motion were systematically varied. Three metrics (Euclidean norm minus one [ENMO], Euclidean norm of the high-pass filtered signals [HFEN], and HFEN plus Euclidean norm of low-pass filtered signals minus 1 g [HFEN+]) were derived for each experimental condition and compared against the reference acceleration (forward kinematics) of the robot arm. We then compared metrics derived from human acceleration signals from the wrist and hip in 97 adults (22–65 yr), and wrist in 63 women (20–35 yr) in whom daily activity-related energy expenditure (PAEE) was available. Results: In the robot experiment, HFEN+ had the lowest error during (vertical plane) rotations at an oscillating frequency higher than the filter cut-off frequency, while for lower frequencies ENMO performed better. In the human experiments, metrics HFEN and ENMO on hip were most discrepant (within- and between-individual explained variance of 0.90 and 0.46, respectively). ENMO, HFEN and HFEN+ explained 34%, 30% and 36% of the variance in daily PAEE, respectively, compared to 26% for a metric which did not attempt to remove the gravitational component (metric EN). Conclusion: In conclusion, none of the metrics as evaluated systematically outperformed all other metrics across a wide range of standardised kinematic conditions. However, choice of metric
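ENMO follows directly from its definition (vector magnitude minus 1 g, negatives truncated); the filtered variants depend on filter-design choices the abstract does not specify. A minimal sketch, in which the fourth-order Butterworth filter and the 0.2 Hz cut-off are assumptions rather than parameters taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def enmo(acc):
    """Euclidean norm minus one (input in g, shape (n_samples, 3));
    negative values are truncated to zero."""
    return np.maximum(np.linalg.norm(acc, axis=1) - 1.0, 0.0)

def hfen(acc, fs, cutoff=0.2):
    """Euclidean norm of the per-axis high-pass filtered signals."""
    b, a = butter(4, cutoff / (fs / 2.0), btype="high")
    return np.linalg.norm(filtfilt(b, a, acc, axis=0), axis=1)

def hfen_plus(acc, fs, cutoff=0.2):
    """HFEN plus Euclidean norm of the low-pass filtered signals minus 1 g."""
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    low = np.linalg.norm(filtfilt(b, a, acc, axis=0), axis=1)
    return hfen(acc, fs, cutoff) + low - 1.0
```

For a stationary sensor measuring only gravity (0, 0, 1 g), all three metrics return (numerically) zero, which is the gravity-removal property the study evaluates.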

  3. Separating movement and gravity components in an acceleration signal and implications for the assessment of human daily physical activity.

    PubMed

    van Hees, Vincent T; Gorzelniak, Lukas; Dean León, Emmanuel Carlos; Eder, Martin; Pias, Marcelo; Taherian, Salman; Ekelund, Ulf; Renström, Frida; Franks, Paul W; Horsch, Alexander; Brage, Søren

    2013-01-01

    Human body acceleration is often used as an indicator of daily physical activity in epidemiological research. Raw acceleration signals contain three basic components: movement, gravity, and noise. Separation of these becomes increasingly difficult during rotational movements. We aimed to evaluate five different methods (metrics) of processing acceleration signals on their ability to remove the gravitational component of acceleration during standardised mechanical movements and the implications for human daily physical activity assessment. An industrial robot rotated accelerometers in the vertical plane. Radius, frequency, and angular range of motion were systematically varied. Three metrics (Euclidean norm minus one [ENMO], Euclidean norm of the high-pass filtered signals [HFEN], and HFEN plus Euclidean norm of low-pass filtered signals minus 1 g [HFEN+]) were derived for each experimental condition and compared against the reference acceleration (forward kinematics) of the robot arm. We then compared metrics derived from human acceleration signals from the wrist and hip in 97 adults (22-65 yr), and wrist in 63 women (20-35 yr) in whom daily activity-related energy expenditure (PAEE) was available. In the robot experiment, HFEN+ had the lowest error during (vertical plane) rotations at an oscillating frequency higher than the filter cut-off frequency, while for lower frequencies ENMO performed better. In the human experiments, metrics HFEN and ENMO on hip were most discrepant (within- and between-individual explained variance of 0.90 and 0.46, respectively). ENMO, HFEN and HFEN+ explained 34%, 30% and 36% of the variance in daily PAEE, respectively, compared to 26% for a metric which did not attempt to remove the gravitational component (metric EN). In conclusion, none of the metrics as evaluated systematically outperformed all other metrics across a wide range of standardised kinematic conditions. However, choice of metric explains different degrees of variance in

  4. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large, accurate simulations are very challenging on small- to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and, through utilization of the K20X Tesla GPUs on each Titan node, achieves an overall runtime speedup of 1.4x, with the charge-density update alone sped up 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  5. Fluid Physics Under a Stochastic Acceleration Field

    NASA Technical Reports Server (NTRS)

    Vinals, Jorge

    2001-01-01

    The research summarized in this report has involved a combined theoretical and computational study of fluid flow that results from the random acceleration environment present onboard space orbiters, also known as g-jitter. We have focused on a statistical description of the observed g-jitter, on the flows that such an acceleration field can induce in a number of experimental configurations of interest, and on extending previously developed methodology to boundary layer flows. Narrow band noise has been shown to describe many of the features of acceleration data collected during space missions. The scale of baroclinically induced flows when the driving acceleration is random is not given by the Rayleigh number. Spatially uniform g-jitter induces additional hydrodynamic forces among suspended particles in incompressible fluids. Stochastic modulation of the control parameter shifts the location of the onset of an oscillatory instability. Random vibration of solid boundaries leads to separation of boundary layers. Steady streaming ahead of a modulated solid-melt interface enhances solute transport, and modifies the stability boundaries of a planar front.

  6. Recent Improvements of Particle and Heavy Ion Transport code System: PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Iwamoto, Yosuke; Hashimoto, Shintaro; Ogawa, Tatsuhiko; Furuta, Takuya; Abe, Shin-ichiro; Kai, Takeshi; Matsuda, Norihiro; Okumura, Keisuke; Kai, Tetsuya; Iwase, Hiroshi; Sihver, Lembit

    2017-09-01

    The Particle and Heavy Ion Transport code System, PHITS, has been developed under the collaboration of several research institutes in Japan and Europe. This system can simulate the transport of most particles with energies up to 1 TeV (per nucleon for ions) using different nuclear reaction models and data libraries. More than 2,500 registered researchers and technicians have used this system for various applications such as accelerator design, radiation shielding and protection, medical physics, and space- and geo-sciences. This paper summarizes the physics models and functions recently implemented in PHITS, between versions 2.52 and 2.88, especially those related to source generation useful for simulating brachytherapy and internal exposures of radioisotopes.

  7. Conceptual design of a 15-TW pulsed-power accelerator for high-energy-density–physics experiments

    DOE PAGES

    Spielman, R. B.; Froula, D. H.; Brent, G.; ...

    2017-06-21

    We have developed a conceptual design of a 15-TW pulsed-power accelerator based on the linear-transformer-driver (LTD) architecture described by Stygar [W. A. Stygar et al., Phys. Rev. ST Accel. Beams 18, 110401 (2015)]. The driver will allow multiple, high-energy-density experiments per day in a university environment and, at the same time, will enable both fundamental and integrated experiments that are scalable to larger facilities. In this design, many individual energy storage units (bricks), each composed of two capacitors and one switch, directly drive the target load without additional pulse compression. Ten LTD modules in parallel drive the load. Each module consists of 16 LTD cavities connected in series, where each cavity is powered by 22 bricks connected in parallel. This design stores up to 2.75 MJ and delivers up to 15 TW in 100 ns to the constant-impedance, water-insulated radial transmission lines. The transmission lines in turn deliver a peak current as high as 12.5 MA to the physics load. To maximize its experimental value and flexibility, the accelerator is coupled to a modern, multibeam laser facility (four beams with up to 5 kJ in 10 ns and one beam with up to 2.6 kJ in 100 ps or less) that can provide auxiliary heating of the physics load. The lasers also enable advanced diagnostic techniques such as x-ray Thomson scattering and multiframe and three-dimensional radiography. In conclusion, the coupled accelerator-laser facility will be the first of its kind and be capable of conducting unprecedented high-energy-density-physics experiments.
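The module/cavity/brick bookkeeping in this abstract can be checked with a few lines of arithmetic. The per-brick energy and the delivered-to-stored ratio are derived figures, not numbers quoted in the source:

```python
# Figures quoted in the abstract.
modules, cavities_per_module, bricks_per_cavity = 10, 16, 22
stored = 2.75e6            # J, maximum stored energy
peak_power = 15e12         # W
pulse = 100e-9             # s

# Derived quantities (illustrative, not from the abstract).
n_bricks = modules * cavities_per_module * bricks_per_cavity
per_brick = stored / n_bricks          # energy held by each 2-capacitor brick
delivered = peak_power * pulse         # upper bound on delivered energy
ratio = delivered / stored             # delivered / stored, dimensionless
```

The totals are self-consistent: 3520 bricks holding about 780 J each store 2.75 MJ, and 15 TW over 100 ns corresponds to 1.5 MJ, comfortably below the stored energy.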

  8. Conceptual design of a 15-TW pulsed-power accelerator for high-energy-density–physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, R. B.; Froula, D. H.; Brent, G.

    We have developed a conceptual design of a 15-TW pulsed-power accelerator based on the linear-transformer-driver (LTD) architecture described by Stygar [W. A. Stygar et al., Phys. Rev. ST Accel. Beams 18, 110401 (2015)]. The driver will allow multiple, high-energy-density experiments per day in a university environment and, at the same time, will enable both fundamental and integrated experiments that are scalable to larger facilities. In this design, many individual energy storage units (bricks), each composed of two capacitors and one switch, directly drive the target load without additional pulse compression. Ten LTD modules in parallel drive the load. Each module consists of 16 LTD cavities connected in series, where each cavity is powered by 22 bricks connected in parallel. This design stores up to 2.75 MJ and delivers up to 15 TW in 100 ns to the constant-impedance, water-insulated radial transmission lines. The transmission lines in turn deliver a peak current as high as 12.5 MA to the physics load. To maximize its experimental value and flexibility, the accelerator is coupled to a modern, multibeam laser facility (four beams with up to 5 kJ in 10 ns and one beam with up to 2.6 kJ in 100 ps or less) that can provide auxiliary heating of the physics load. The lasers also enable advanced diagnostic techniques such as x-ray Thomson scattering and multiframe and three-dimensional radiography. In conclusion, the coupled accelerator-laser facility will be the first of its kind and be capable of conducting unprecedented high-energy-density-physics experiments.

  9. Application of Plasma Waveguides to High Energy Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milchberg, Howard M

    2013-03-30

    The eventual success of laser-plasma based acceleration schemes for high-energy particle physics will require the focusing and stable guiding of short intense laser pulses in reproducible plasma channels. For this goal to be realized, many scientific issues need to be addressed. These issues include an understanding of the basic physics of, and an exploration of various schemes for, plasma channel formation. In addition, the coupling of intense laser pulses to these channels and the stable propagation of pulses in the channels require study. Finally, new theoretical and computational tools need to be developed to aid in the design and analysis of experiments and future accelerators. Here we propose a 3-year renewal of our combined theoretical and experimental program on the applications of plasma waveguides to high-energy accelerators. During the past grant period we have made a number of significant advances in the science of laser-plasma based acceleration. We pioneered the development of clustered gases as a new highly efficient medium for plasma channel formation. Our contributions here include theoretical and experimental studies of the physics of cluster ionization, heating, explosion, and channel formation. We have demonstrated for the first time the generation of and guiding in a corrugated plasma waveguide. The fine structure demonstrated in these guides is only possible with cluster jet heating by lasers. The corrugated guide is a slow wave structure operable at arbitrarily high laser intensities, allowing direct laser acceleration, a process we have explored in detail with simulations. The development of these guides opens the possibility of direct laser acceleration, a true miniature analogue of the SLAC RF-based accelerator. Our theoretical studies during this period have also contributed to the further development of the simulation codes, Wake and QuickPIC, which can be used for both laser driven and beam driven plasma based acceleration

  10. Electron-beam dynamics for an advanced flash-radiography accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August Jr.

    2015-06-22

    Beam dynamics issues were assessed for a new linear induction electron accelerator. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell (PIC) codes. Beam instabilities investigated included beam breakup (BBU), image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. Beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos will result if the same engineering standards and construction details are upheld.

  11. Proposal for an Accelerator R&D User Facility at Fermilab's Advanced Superconducting Test Accelerator (ASTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Church, M.; Edwards, H.; Harms, E.

    2013-10-01

    Fermilab is the nation’s particle physics laboratory, supported by the DOE Office of High Energy Physics (OHEP). Fermilab is a world leader in accelerators, with a demonstrated track-record—spanning four decades—of excellence in accelerator science and technology. We describe the significant opportunity to complete, in a highly leveraged manner, a unique accelerator research facility that supports the broad strategic goals in accelerator science and technology within the OHEP. While the US accelerator-based HEP program is oriented toward the Intensity Frontier, which requires modern superconducting linear accelerators and advanced high-intensity storage rings, there are no accelerator test facilities that support the accelerator science of the Intensity Frontier. Further, nearly all proposed future accelerators for Discovery Science will rely on superconducting radiofrequency (SRF) acceleration, yet there are no dedicated test facilities to study SRF capabilities for beam acceleration and manipulation in prototypic conditions. Finally, there are a wide range of experiments and research programs beyond particle physics that require the unique beam parameters that will only be available at Fermilab’s Advanced Superconducting Test Accelerator (ASTA). To address these needs we submit this proposal for an Accelerator R&D User Facility at ASTA. The ASTA program is based on the capability provided by an SRF linac (which provides electron beams from 50 MeV to nearly 1 GeV) and a small storage ring (with the ability to store either electrons or protons) to enable a broad range of beam-based experiments to study fundamental limitations to beam intensity and to develop transformative approaches to particle-beam generation, acceleration and manipulation which cannot be done elsewhere. It will also establish a unique resource for R&D towards Energy Frontier facilities and a test-bed for SRF accelerators and high brightness beam applications in support of the

  12. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    NASA Astrophysics Data System (ADS)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-Body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structures at small scales. Most previous FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. AX-GADGET provides an interesting alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and with the results of other codes, validating the QP as a crucial player in structure formation at small scales.
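For context, the Quantum Potential that the SPH routines must differentiate takes, in the standard Madelung (fluid) form used in the FDM literature (stated here as background, with m the boson mass and ρ the density; not quoted from the abstract):

```latex
Q = -\frac{\hbar^{2}}{2 m^{2}}\,
    \frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}},
\qquad
\mathbf{a}_{\mathrm{QP}} = -\nabla Q
```

Evaluating ∇²√ρ stably from a particle distribution is precisely where the improved SPH derivative schemes mentioned above matter.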

  13. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; hide

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy system and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effect of the solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA is using the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  14. The Dresden Felsenkeller shallow-underground accelerator laboratory for nuclear astrophysics - Status and first physics program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilgner, Ch.

    Favored by the low background in underground laboratories, low-background accelerator-based experiments are an important tool to study nuclear reactions involving stable charged particles. This technique has been used for many years with great success at the 0.4 MV LUNA accelerator in the Gran Sasso laboratory in Italy, protected from cosmic rays by 1400 m of rock. However, the nuclear reactions of helium and carbon burning and the neutron source reactions for the astrophysical s-process require higher beam energies than those available at LUNA. Also the study of solar fusion reactions necessitates new data at higher energies. As a result, in the present NuPECC long range plan for nuclear physics in Europe, the installation of one or more higher-energy underground accelerators is strongly recommended. An intercomparison exercise using the same High-Purity Ge detector at several sites has shown that, with a combination of 45 m rock overburden, as can be found in the Felsenkeller underground site in Dresden, and an active veto against the remaining muon flux, in a typical nuclear astrophysics setup a background level can be achieved that is similar to a deep underground scenario such as the Gran Sasso underground laboratory. Recently, a muon background study and geodetic measurements were carried out by the REGARD group. It was estimated that the rock overburden at the place of the future ion accelerator is equivalent to 130 m of water. The maximum muon flux measured was 2.5 m⁻² sr⁻¹ s⁻¹, in the direction of the tunnel entrance. Based on this finding, a used 5 MV pelletron tandem accelerator with 250 μA up-charge current and external sputter ion source has been obtained and transported to Dresden. Work on an additional radio-frequency ion source on the high voltage terminal is in progress and far advanced. The installation of the accelerator in the Felsenkeller is expected for the near future. The status of the project and

  15. Fluid Physics in a Fluctuating Acceleration Environment

    NASA Technical Reports Server (NTRS)

    Thomson, J. Ross; Drolet, Francois; Vinals, Jorge

    1996-01-01

    We summarize several aspects of an ongoing investigation of the effects that stochastic residual accelerations (g-jitter) onboard spacecraft can have on experiments conducted in a microgravity environment. The residual acceleration field is modeled as a narrow band noise, characterized by three independent parameters: intensity ⟨g²⟩, dominant angular frequency Ω, and characteristic correlation time τ. Realistic values for these parameters are obtained from an analysis of acceleration data corresponding to the SL-J mission, as recorded by the SAMS instruments. We then use the model to address the random motion of a solid particle suspended in an incompressible fluid subjected to such random accelerations. As an extension, the effect of jitter on coarsening of a solid-liquid mixture is briefly discussed, and corrections to diffusion controlled coarsening evaluated. We conclude that jitter will not be significant in the experiment 'Coarsening of solid-liquid mixtures' to be conducted in microgravity. Finally, modifications to the location of onset of instability in systems driven by a random force are discussed by extending the standard reduction to the center manifold to the stochastic case. Results pertaining to time-modulated oscillatory convection are briefly discussed.
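One common way to realize narrow-band noise with the three parameters above is to drive a damped harmonic oscillator with white noise: the output spectrum then peaks near the oscillator frequency with a width set by the damping. The synthesis scheme below is an illustration of that idea only, not the authors' actual model; the integration scheme and normalization are assumptions:

```python
import numpy as np

def narrow_band_noise(n, dt, omega, tau, g_rms, seed=0):
    """Synthesize narrow-band noise of RMS amplitude g_rms by driving a
    damped oscillator (peak near omega, correlation time ~ tau) with
    Gaussian white noise; semi-implicit Euler time stepping."""
    rng = np.random.default_rng(seed)
    gamma = 1.0 / tau                       # damping rate from correlation time
    x, v = 0.0, 0.0
    out = np.empty(n)
    for i in range(n):
        f = rng.normal()
        v += dt * (-2.0 * gamma * v - omega**2 * x) + np.sqrt(dt) * f
        x += dt * v                         # position updated with new velocity
        out[i] = x
    out *= g_rms / out.std()                # rescale to the requested intensity
    return out
```

Sampling, say, 20 seconds at 1 kHz with Ω = 2π·10 rad/s and τ = 1 s yields a signal whose RMS matches the requested intensity by construction.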

  16. Neutrino Physics with Accelerator Driven Subcritical Reactors

    NASA Astrophysics Data System (ADS)

    Ciuffoli, Emilio

    2017-09-01

    Accelerator Driven Subcritical System (ADS) reactors are being developed around the world, to produce energy and, at the same time, to provide an efficient way to dispose of and to recycle nuclear waste. Used nuclear fuel, by itself, cannot sustain a chain reaction; however in ADS reactors the additional neutrons which are required will be supplied by a high-intensity accelerator. This accelerator will produce, as a by-product, a large quantity of ν̄μ via muon Decay At Rest (µDAR). Using liquid scintillators, it will be possible to measure the CP-violating phase δCP and to look for experimental signs of the presence of sterile neutrinos in the appearance channel, testing the LSND and MiniBooNE anomalies. Even in the first stage of the project, when the beam energy will be lower, it will be possible to produce ν̄e via Isotope Decay At Rest (IsoDAR), which can be used to provide competitive bounds on sterile neutrinos in the disappearance channel. I will consider several experimental setups in which the antineutrinos are created using accelerators that will be constructed as part of the China-ADS program.

  17. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main language choices for programming GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware configurations.

  18. EDITORIAL: Metrological Aspects of Accelerator Technology and High Energy Physics Experiments

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.; Pozniak, Krzysztof T.

    2007-08-01

    The subject of this special feature in Measurement Science and Technology concerns measurement methods, devices and subsystems, both hardware and software aspects, applied in large experiments of high energy physics (HEP) and superconducting RF accelerator technology (SRF). These experiments concern mainly the physics of elementary particles or the building of new machines and detectors. The papers present practical examples of applied solutions in large, contemporary, international research projects such as HERA, LHC, FLASH, XFEL, ILC and others. These machines are unique in their global scale and consist of highly specialized apparatus. The apparatus is characterized by very large dimensions, a considerable use of resources and a high level of overall technical complexity. These experiments possess a large number of measurement channels (ranging from thousands to over 100 million), are characterized by fast processing of measured data and high measurement accuracies, and work in quite adverse environments. The measurement channels cooperate with a large number of different sensors of momenta, energies, trajectories of elementary particles, electron, proton and photon beam profiles, accelerating fields in resonant cavities, and many others. The provision of high-quality measurement systems requires the designers to use only the most up-to-date technical solutions, measurement technologies, components and devices. Research work in these demanding fields is a natural birthplace of new measurement methods, new data processing and acquisition algorithms, and complex, networked measurement system diagnostics and monitoring. These developments are taking place in both hardware and software layers. The chief intention of this special feature is that the papers represent equally some of the most current metrology research problems in HEP and SRF. The accepted papers have been divided into four topical groups: superconducting cavities (4 papers), low level RF systems (8 papers

  19. The physical properties of accelerated Portland cement for endodontic use.

    PubMed

    Camilleri, J

    2008-02-01

    To investigate the physical properties of a novel accelerated Portland cement. The setting time, compressive strength, pH and solubility of white Portland cement (Lafarge Asland; CEM 1, 52.5 N) and accelerated Portland cement (Proto A) produced by excluding gypsum from the manufacturing process (Aalborg White) and a modified version with 4 : 1 addition of bismuth oxide (Proto B) were evaluated. Proto A set in 8 min. The compressive strength of Proto A was comparable with that of Portland cement at all testing periods (P > 0.05). Additions of bismuth oxide extended the setting time and reduced the compressive strength (P < 0.05). Both cements and storage solution were alkaline. All cements tested increased by >12% of their original weight after immersion in water for 1 day with no further absorption after 28 days. Addition of bismuth oxide increased the water uptake of the novel cement (P < 0.05). The setting time of Portland cement can be reduced by excluding the gypsum during the last stage of the manufacturing process without affecting its other properties. Addition of bismuth oxide affected the properties of the novel cement. Further investigation on the effect that bismuth oxide has on the properties of mineral trioxide aggregate is thus warranted.

  20. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without

  1. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without
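The kick-drift-kick leapfrog scheme named in the abstract can be sketched for a single particle in a toy potential (a minimal Python illustration, not VINE's Fortran 95 implementation; the harmonic acceleration a(x) = -x is an assumption chosen so the result is easy to check against energy conservation):

```python
# Minimal kick-drift-kick leapfrog for one particle in a harmonic
# potential (a(x) = -x), illustrating the second-order integrator
# VINE offers as one of its time-stepping options.

def leapfrog(x, v, accel, dt, steps):
    for _ in range(steps):
        v += 0.5 * dt * accel(x)   # half kick
        x += dt * v                # drift
        v += 0.5 * dt * accel(x)   # half kick
    return x, v

# Harmonic oscillator: E = (v^2 + x^2)/2 should stay near its
# initial value of 0.5 because leapfrog is symplectic.
x1, v1 = leapfrog(1.0, 0.0, lambda q: -q, dt=0.01, steps=1000)
energy = 0.5 * (v1 ** 2 + x1 ** 2)
```

With dt = 0.01, the energy error remains a bounded oscillation of order dt² rather than a secular drift, which is why symplectic integrators of this kind are favored for long N-body evolutions.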

  2. Predictive coding accelerates word recognition and learning in the early stages of language development.

    PubMed

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month-olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c.

  3. USPAS | U.S. Particle Accelerator School

    Science.gov Websites

    U.S. Particle Accelerator School — Education in Beam Physics and Accelerator Technology. Site sections: About, University Credits, Joint International Accelerator School, University-Style Programs, Symposium-Style Programs.

  4. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  5. Future HEP Accelerators: The US Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha; Shiltsev, Vladimir

    2015-11-02

    Accelerator technology has advanced tremendously since the introduction of accelerators in the 1930s, and particle accelerators have become indispensable instruments in high energy physics (HEP) research to probe Nature at smaller and smaller distances. At present, accelerator facilities can be classified into Energy Frontier colliders that enable direct discoveries and studies of high mass scale particles and Intensity Frontier accelerators for exploration of extremely rare processes, usually at relatively low energies. The near term strategies of the global energy frontier particle physics community are centered on fully exploiting the physics potential of the Large Hadron Collider (LHC) at CERN through its high-luminosity upgrade (HL-LHC), while the intensity frontier HEP research is focused on studies of neutrinos at the MW-scale beam power accelerator facilities, such as Fermilab Main Injector with the planned PIP-II SRF linac project. A number of next generation accelerator facilities have been proposed and are currently under consideration for the medium- and long-term future programs of accelerator-based HEP research. In this paper, we briefly review the post-LHC energy frontier options, both for lepton and hadron colliders in various regions of the world, as well as possible future intensity frontier accelerator facilities.

  6. Estimation of dose delivered to accelerator devices from stripping of 18.5 MeV/n 238U ions using the FLUKA code

    NASA Astrophysics Data System (ADS)

    Oranj, Leila Mokhtari; Lee, Hee-Seock; Leitner, Mario Santana

    2017-12-01

    In Korea, a heavy ion accelerator facility (RAON) has been designed for the production of rare isotopes. The 90° bending section of this accelerator includes a 1.3-μm carbon stripper followed by two dipole magnets and other devices. The incident beam, 18.5 MeV/n 238U33+,34+ ions, passes through the carbon stripper at the beginning of the section. The two dipoles are tuned to transport 238U ions with the specific charge states 77+, 78+, 79+, 80+ and 81+; other ions are deflected at the bends and cause beam losses. These beam losses are a concern for the devices of the transport/beam line. The absorbed dose in devices and the prompt dose in the tunnel were calculated using the FLUKA code in order to estimate the radiation damage of devices located at the 90° bending section and for radiation protection. A novel method to transport the multi-charge-state 238U ion beam was applied in the FLUKA code, using the charge distribution of 238U ions after the stripper obtained from the LISE++ code. The calculated results showed that the absorbed dose in the devices is influenced by the geometrical arrangement. The maximum dose was observed at the coils of the first, second, fourth and fifth quadrupoles placed after the first dipole magnet. The integrated doses for 30 years of operation with 9.5 pμA of 238U ions were about 2 MGy for those quadrupoles. In conclusion, protection of the devices, particularly the quadrupoles, will be necessary to reduce damage. Moreover, the results showed that the prompt radiation penetrated within the first 60-120 cm of concrete.

  7. Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Kelley, C. T.; Slattery, Stuart R

    A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method, namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
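The Anderson acceleration scheme discussed above can be sketched for a generic fixed-point map (a minimal Python illustration of Anderson(m), not the authors' neutronics/thermal-hydraulics code; the test problem x = cos(x) is an arbitrary choice for demonstration):

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=100):
    """Anderson(m) acceleration for the fixed-point iteration x = g(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, F = [], []                         # histories of g-values and residuals
    for _ in range(maxit):
        gx = g(x)
        f = gx - x                        # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x
        X.append(gx)
        F.append(f)
        if len(F) > m + 1:                # keep at most m+1 history entries
            X.pop(0)
            F.pop(0)
        if len(F) == 1:
            x = gx                        # first step is plain Picard
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dX @ gamma           # mixed update (Walker-Ni form)
    return x

# Toy contraction: the fixed point of x = cos(x) is the Dottie number.
root = anderson(np.cos, 1.0)
```

Note how a plain Picard update (`x = gx`) falls out as the history-free special case, which is why Anderson acceleration can wrap an existing Picard coupling loop with little extra implementation effort.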

  8. GPU Optimizations for a Production Molecular Docking Code*

    PubMed Central

    Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER), which achieved a roughly 5× speedup over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on the CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server, which has over 4000 active users. PMID:26594667

  9. GPU Optimizations for a Production Molecular Docking Code.

    PubMed

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER), which achieved a roughly 5× speedup over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on the CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server, which has over 4000 active users.
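The run time being dominated by the 3D FFT reflects how FFT-based docking codes such as PIPER score all rigid translations of the ligand at once via a grid correlation. A minimal sketch of that core operation (illustrative NumPy in 1-D, not PIPER's MKL/cuFFT implementation):

```python
import numpy as np

def fft_correlate(receptor, ligand):
    """Score every (circular) grid translation of `ligand` against
    `receptor` at once: c[t] = sum_s receptor[(s + t) % N] * ligand[s].
    One forward/inverse FFT pair replaces an O(N^2) direct correlation."""
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand)
    return np.real(np.fft.ifftn(R * np.conj(L)))

# A delta-function "ligand" simply reads the receptor grid back out,
# which makes the correlation easy to verify by hand.
receptor = np.array([1.0, 2.0, 3.0, 4.0])
ligand = np.array([1.0, 0.0, 0.0, 0.0])
scores = fft_correlate(receptor, ligand)
```

In a real docking code the grids are three-dimensional and hold shape/energy potentials, so nearly all of the arithmetic lives inside the FFT library calls, exactly the situation the abstract describes.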

  10. Electron-Beam Dynamics for an Advanced Flash-Radiography Accelerator

    DOE PAGES

    Ekdahl, Carl

    2015-11-17

    Beam dynamics issues were assessed for a new linear induction electron accelerator being designed for multipulse flash radiography of large explosively driven hydrodynamic experiments. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Furthermore, beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell codes. Beam instabilities investigated included beam breakup, image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. Beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam of radiographic quality equivalent to that of the present accelerators at Los Alamos National Laboratory will result if the same engineering standards and construction details are upheld.

  11. Relaunch of the Interactive Plasma Physics Educational Experience (IPPEX)

    NASA Astrophysics Data System (ADS)

    Dominguez, A.; Rusaitis, L.; Zwicker, A.; Stotler, D. P.

    2015-11-01

    In the late 1990s PPPL's Science Education Department developed an innovative online site called the Interactive Plasma Physics Educational Experience (IPPEX). It featured (among other modules) two Java-based applications which simulated tokamak physics: a steady-state tokamak (SST) and a time-dependent tokamak (TDT). The physics underlying the SST and the TDT is based on the ASPECT code, a global power-balance code developed to evaluate the performance of fusion reactor designs. We have relaunched the IPPEX site with updated modules and functionality: the site itself is now dynamic on all platforms; the graphic design has been updated to current standards; the virtual tokamak has been reprogrammed in JavaScript, taking advantage of the speed and compactness of the code; and the GUI of the tokamak has been completely redesigned, including more intuitive representations of changes in the plasma, e.g., particles moving along magnetic field lines. The use of GPU-accelerated computation provides accurate and smooth visual representations of the plasma. We will present the current version of IPPEX as well as near-term plans for incorporating real-time NSTX-U data into the simulation.

  12. Physical-layer network coding for passive optical interconnect in datacenter networks.

    PubMed

    Lin, Rui; Cheng, Yuxin; Guan, Xun; Tang, Ming; Liu, Deming; Chan, Chun-Kit; Chen, Jiajia

    2017-07-24

    We introduce the physical-layer network coding (PLNC) technique in a passive optical interconnect (POI) architecture for datacenter networks. The implementation of PLNC in the POI at 2.5 Gb/s and 10 Gb/s has been experimentally validated, while the gains in terms of network-layer performance have been investigated by simulation. The results reveal that, in order to achieve negligible packet drop, wavelength usage can be reduced by half, while a significant improvement in packet delay, especially under high traffic load, can be achieved by employing PLNC over the POI.
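The packet-level idea behind network coding in a two-way relay can be sketched in a few lines (a hedged illustration of the coding principle only; the actual PLNC scheme operates on superimposed optical signals, not on byte strings):

```python
def xor_bytes(p, q):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(p, q))

# Two-way relay exchange with network coding: instead of forwarding
# two packets separately, the relay broadcasts ONE coded packet, and
# each endpoint removes its own contribution to recover the other's.
pkt_a = b"hello"                        # uplink packet from node A
pkt_b = b"world"                        # uplink packet from node B
coded = xor_bytes(pkt_a, pkt_b)         # relay combines the two packets
recovered_at_a = xor_bytes(coded, pkt_a)  # A XORs out its own packet
recovered_at_b = xor_bytes(coded, pkt_b)  # B does the same
```

Halving the number of downlink transmissions is the packet-level analogue of the halved wavelength usage reported in the abstract.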

  13. New features in the design code Tlie

    NASA Astrophysics Data System (ADS)

    van Zeijts, Johannes

    1993-12-01

    We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control-room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or simply printed out. The code is easily extensible with new datatypes.

  14. Plasma Wakefield Acceleration and FACET - Facilities for Accelerator Science and Experimental Test Beams at SLAC

    ScienceCinema

    Seryi, Andrei

    2017-12-22

    Plasma wakefield acceleration is one of the most promising approaches to advancing accelerator technology. This approach offers a potential 1,000-fold or more increase in acceleration over a given distance, compared to existing accelerators.  FACET, enabled by the Recovery Act funds, will study plasma acceleration, using short, intense pulses of electrons and positrons. In this lecture, the physics of plasma acceleration and features of FACET will be presented.  

  15. Two-dimensional spatiotemporal coding of linear acceleration in vestibular nuclei neurons

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Bush, G. A.; Perachio, A. A.

    1993-01-01

    Response properties of vertical (VC) and horizontal (HC) canal/otolith-convergent vestibular nuclei neurons were studied in decerebrate rats during stimulation with sinusoidal linear accelerations (0.2-1.4 Hz) along different directions in the head horizontal plane. A novel characteristic of the majority of tested neurons was the nonzero response often elicited during stimulation along the "null" direction (i.e., the direction perpendicular to the maximum sensitivity vector, Smax). The tuning ratio (Smin gain/Smax gain), a measure of the two-dimensional spatial sensitivity, depended on stimulus frequency. For most vestibular nuclei neurons, the tuning ratio was small at the lowest stimulus frequencies and progressively increased with frequency. Specifically, HC neurons were characterized by a flat Smax gain and an approximately 10-fold increase of Smin gain per frequency decade. Thus, these neurons encode linear acceleration when stimulated along their maximum sensitivity direction, and the rate of change of linear acceleration (jerk) when stimulated along their minimum sensitivity direction. While the Smax vectors were distributed throughout the horizontal plane, the Smin vectors were concentrated mainly ipsilaterally with respect to head acceleration and clustered around the naso-occipital head axis. The properties of VC neurons were distinctly different from those of HC cells. The majority of VC cells showed decreasing Smax gains and small, relatively flat, Smin gains as a function of frequency. The Smax vectors were distributed ipsilaterally relative to the induced (apparent) head tilt. In type I anterior or posterior VC neurons, Smax vectors were clustered around the projection of the respective ipsilateral canal plane onto the horizontal head plane. These distinct spatial and temporal properties of HC and VC neurons during linear acceleration are compatible with the spatiotemporal organization of the horizontal and the vertical/torsional ocular responses
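One common way to model the two-dimensional spatial sensitivity described above is to combine the Smax and Smin components in quadrature. The sketch below is an illustrative assumption (not the authors' model): it shows how a nonzero Smin gain produces the nonzero response along the "null" direction and defines the tuning ratio:

```python
import math

def response_gain(theta, s_max, s_min):
    """Gain of a two-dimensional spatiotemporal unit for a stimulus
    direction theta (radians from the maximum-sensitivity axis),
    combining the Smax and Smin components in quadrature. With
    s_min = 0 the cell is one-dimensional and silent along its
    null direction."""
    return math.hypot(s_max * math.cos(theta), s_min * math.sin(theta))

g_pref = response_gain(0.0, 1.0, 0.2)           # along Smax
g_null = response_gain(math.pi / 2, 1.0, 0.2)   # along the "null" direction
tuning_ratio = g_null / g_pref                  # Smin gain / Smax gain
```

A frequency-dependent s_min, as reported for the HC neurons, would then make the tuning ratio grow with stimulus frequency while leaving the preferred-direction gain flat.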

  16. A portable platform for accelerated PIC codes and its application to GPUs using OpenACC

    NASA Astrophysics Data System (ADS)

    Hariri, F.; Tran, T. M.; Jocksch, A.; Lanti, E.; Progsch, J.; Messmer, P.; Brunner, S.; Gheller, C.; Villard, L.

    2016-10-01

    We present a portable platform, called PIC_ENGINE, for accelerating Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as Graphics Processing Units (GPUs). The aim of this development is efficient simulation on future exascale systems by allowing different parallelization strategies depending on the application problem and the specific architecture. To this end, the platform contains the basic steps of the PIC algorithm and has been designed as a test bed for different algorithmic options and data structures. Among the architectures that this engine can explore, particular attention is given here to systems equipped with GPUs. The study demonstrates that our portable PIC implementation based on the OpenACC programming model can achieve performance closely matching theoretical predictions. Using the Cray XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU can outperform the same code on an Intel Sandy Bridge 8-core CPU by a factor of 3.4.
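Among the basic steps of the PIC algorithm bundled in such a platform, charge deposition is representative. A minimal one-dimensional cloud-in-cell sketch (plain Python for clarity, whereas PIC_ENGINE targets OpenACC on GPUs):

```python
import numpy as np

def deposit_charge(positions, grid_n, box_len, charge=1.0):
    """1-D cloud-in-cell charge deposition with periodic boundaries:
    each particle splits its charge linearly between the two nearest
    grid nodes, conserving total charge exactly."""
    rho = np.zeros(grid_n)
    dx = box_len / grid_n
    for x in positions:
        s = x / dx
        i = int(np.floor(s)) % grid_n
        w = s - np.floor(s)               # fractional distance past node i
        rho[i] += charge * (1.0 - w)      # share to left node
        rho[(i + 1) % grid_n] += charge * w  # share to right node
    return rho

rho = deposit_charge([0.25, 0.75], grid_n=4, box_len=1.0)
```

On a GPU this scatter step is the hard part to parallelize, since many particles may update the same grid node concurrently, which is exactly why such platforms experiment with alternative data structures.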

  17. Doing accelerator physics using SDDS, UNIX, and EPICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.; Emery, L.; Sereno, N.

    1995-12-31

    The use of the SDDS (Self-Describing Data Sets) file protocol, together with the UNIX operating system and EPICS (Experimental Physics and Industrial Control System), has proved powerful during the commissioning of the APS (Advanced Photon Source) accelerator complex. The SDDS file protocol has permitted a tool-oriented approach to developing applications, wherein generic programs are written that function as part of multiple applications. While EPICS-specific tools were written for data collection, automated experiment execution, closed-loop control, and so forth, data processing and display are done with the SDDS Toolkit. Experiments and data reduction are implemented as UNIX shell scripts that coordinate the execution of EPICS-specific tools and SDDS tools. Because of the power and generic nature of the individual tools and of the UNIX shell environment, automated experiments can be prepared and executed rapidly in response to unanticipated needs or new ideas. Examples are given of the application of this methodology to beam motion characterization, beam-position-monitor offset measurements, and klystron characterization.

  18. Neutron physics with accelerators

    NASA Astrophysics Data System (ADS)

    Colonna, N.; Gunsing, F.; Käppeler, F.

    2018-07-01

    Neutron-induced nuclear reactions are of key importance for a variety of applications in basic and applied science. Apart from nuclear reactors, accelerator-based neutron sources play a major role in experimental studies, especially for the determination of reaction cross sections over a wide energy span from sub-thermal to GeV energies. After an overview of present and upcoming facilities, this article deals with state-of-the-art detectors and equipment, including the often difficult sample problem. These issues are illustrated at selected examples of measurements for nuclear astrophysics and reactor technology with emphasis on their intertwined relations.

  19. Tsallis entropy and complexity theory in the understanding of physics of precursory accelerating seismicity.

    NASA Astrophysics Data System (ADS)

    Vallianatos, Filippos; Chatzopoulos, George

    2014-05-01

    Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power law of (a) the cumulative number of earthquakes, (b) the cumulative Benioff strain and (c) the cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution which describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results, based on Tsallis entropy and energy conservation, give a new view on the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity are also presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
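The q-exponential that underlies the Tsallis-entropy framework can be stated compactly. The sketch below (illustrative, with the standard cutoff convention assumed) shows that it reduces to the ordinary exponential as q → 1 and develops a power-law tail for q > 1:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1-q)x]_+^{1/(1-q)},
    with the usual convention that it vanishes when the bracket is
    non-positive; it reduces to exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

# For q > 1, exp_q(-x) decays as a power law rather than exponentially:
# here exp_q(-2, 1.5) = (1 + 0.5*2)^(1/(1-1.5)) = 2^(-2) = 0.25.
tail = q_exp(-2.0, 1.5)
```

It is this power-law (rather than exponential) decay of the q-exponential distribution that connects the non-extensive framework to the observed power-law acceleration of precursory seismicity.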

  20. Proceedings of the 1995 Particle Accelerator Conference and international Conference on High-Energy Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1996-01-01

    Papers from the sixteenth biennial Particle Accelerator Conference, an international forum on accelerator science and technology held May 1–5, 1995, in Dallas, Texas, organized by Los Alamos National Laboratory (LANL) and Stanford Linear Accelerator Center (SLAC), jointly sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Nuclear and Plasma Sciences Society (NPSS), the American Physical Society (APS) Division of Particles and Beams (DPB), and the International Union of Pure and Applied Physics (IUPAP), and conducted with support from the US Department of Energy, the National Science Foundation, and the Office of Naval Research.

  1. Nuclear physics in particle therapy: a review

    NASA Astrophysics Data System (ADS)

    Durante, Marco; Paganetti, Harald

    2016-09-01

    Charged particle therapy has been largely driven and influenced by nuclear physics. The increase in energy deposition density along the ion path in the body allows reducing the dose to normal tissues during radiotherapy compared to photons. Clinical results of particle therapy support the physical rationale for this treatment, but the method remains controversial because of the high cost and of the lack of comparative clinical trials proving the benefit compared to x-rays. Research in applied nuclear physics, including nuclear interactions, dosimetry, image guidance, range verification, novel accelerators and beam delivery technologies, can significantly improve the clinical outcome in particle therapy. Measurements of fragmentation cross-sections, including those for the production of positron-emitting fragments, and attenuation curves are needed for tuning Monte Carlo codes, whose use in clinical environments is rapidly increasing thanks to fast calculation methods. Existing cross sections and codes are indeed not very accurate in the energy and target regions of interest for particle therapy. These measurements are especially urgent for new ions to be used in therapy, such as helium. Furthermore, nuclear physics hardware developments are frequently finding applications in ion therapy due to similar requirements concerning sensors and real-time data processing. In this review we will briefly describe the physics bases, and concentrate on the open issues.

  2. Nuclear physics in particle therapy: a review.

    PubMed

    Durante, Marco; Paganetti, Harald

    2016-09-01

    Charged particle therapy has been largely driven and influenced by nuclear physics. The increase in energy deposition density along the ion path in the body allows reducing the dose to normal tissues during radiotherapy compared to photons. Clinical results of particle therapy support the physical rationale for this treatment, but the method remains controversial because of the high cost and of the lack of comparative clinical trials proving the benefit compared to x-rays. Research in applied nuclear physics, including nuclear interactions, dosimetry, image guidance, range verification, novel accelerators and beam delivery technologies, can significantly improve the clinical outcome in particle therapy. Measurements of fragmentation cross-sections, including those for the production of positron-emitting fragments, and attenuation curves are needed for tuning Monte Carlo codes, whose use in clinical environments is rapidly increasing thanks to fast calculation methods. Existing cross sections and codes are indeed not very accurate in the energy and target regions of interest for particle therapy. These measurements are especially urgent for new ions to be used in therapy, such as helium. Furthermore, nuclear physics hardware developments are frequently finding applications in ion therapy due to similar requirements concerning sensors and real-time data processing. In this review we will briefly describe the physics bases, and concentrate on the open issues.

  3. ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings

    NASA Astrophysics Data System (ADS)

    Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-12-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
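    The node-based tracking architecture described above can be sketched in a few lines. The class and method names below are hypothetical, not ORBIT's actual C++/Python API, and the physics is reduced to a linear drift and a collimating aperture; the point is only the pattern of a bunch passing through a sequence of nodes, each of which may transform or cull the particles.

    ```python
    import numpy as np

    class Node:
        """One lattice element, effect, or diagnostic; acts on the whole bunch."""
        def track(self, bunch):
            raise NotImplementedError

    class Drift(Node):
        def __init__(self, length):
            self.length = length
        def track(self, bunch):
            # Linear drift: columns are (x, x', y, y') with x', y' the slopes.
            bunch[:, 0] += self.length * bunch[:, 1]  # x += L * x'
            bunch[:, 2] += self.length * bunch[:, 3]  # y += L * y'

    class Aperture(Node):
        def __init__(self, radius):
            self.radius = radius
        def track(self, bunch):
            # Collimation: drop particles outside the aperture (returns a new bunch).
            r2 = bunch[:, 0]**2 + bunch[:, 2]**2
            return bunch[r2 <= self.radius**2]

    def track_turns(bunch, lattice, n_turns):
        for _ in range(n_turns):
            for node in lattice:
                out = node.track(bunch)
                if out is not None:   # nodes may replace the bunch (e.g. losses)
                    bunch = out
        return bunch
    ```

    Space charge, RF, and impedance nodes would slot into the same loop, which is what makes the node abstraction convenient for composing realistic ring simulations.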

  4. GPU accelerated manifold correction method for spinning compact binaries

    NASA Astrophysics Data System (ADS)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on compute unified device architecture (CUDA) technology, is designed to simulate the dynamical evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU are confirmed by various numerical experiments. Numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the CPU-only implementation. Shared-memory and register optimization techniques enormously increase the achievable acceleration without additional hardware cost: for a phase-space scan of 314 × 314 orbits, the GPU code runs nearly 13 times faster than the CPU code. In addition, the GPU-accelerated manifold correction method is used to study numerically how the dynamics are affected by the spin-induced quadrupole-monopole interaction in black hole binary systems.
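    Independent of the GPU details, the core idea of a manifold correction is to project the numerical solution back onto a conserved-quantity surface after each step. A minimal single-orbit sketch, using the Newtonian Kepler problem and Nacozy-style velocity scaling onto the energy surface rather than the paper's PN Hamiltonian:

    ```python
    import numpy as np

    def kinetic(p):
        return 0.5 * np.dot(p, p)

    def potential(q):
        return -1.0 / np.linalg.norm(q)   # Kepler potential, GM = 1

    def accel(q):
        r = np.linalg.norm(q)
        return -q / r**3

    def leapfrog(q, p, dt):
        # One kick-drift-kick step of a symplectic integrator.
        p = p + 0.5 * dt * accel(q)
        q = q + dt * p
        p = p + 0.5 * dt * accel(q)
        return q, p

    def manifold_correct(q, p, e0):
        # Velocity scaling: rescale p so that H(q, p) = e0 exactly again.
        t_target = e0 - potential(q)
        if t_target > 0:
            p = p * np.sqrt(t_target / kinetic(p))
        return p
    ```

    After each corrected step the energy error is at the rounding level by construction, which is the property that makes long-term phase-space scans trustworthy.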

  5. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

    Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of CSR-driven instabilities demands accurate descriptions capable of including the many features of an actual accelerating device. A code devoted to the analysis of these problems should be both fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have proved very efficient for transport problems in accelerators, and the extension of these methods to the non-linear case is ideally suited to CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, that includes the non-linear contributions due to wake-field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of the instability and the effects associated with the bunching mechanisms that drive its growth. In addition, considerations on the instability threshold are developed.
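    The exponential-operator machinery can be illustrated on a toy two-dimensional system u' = (A + B)u, where each generator is exponentiated exactly and the factors are composed by symmetric (Strang) operator splitting. This is only an illustration of the algebraic technique, not the paper's Vlasov solver; the generators here are invented for the example.

    ```python
    import numpy as np

    # Toy generators: A rotates, B damps anisotropically; A and B do not commute.
    def exp_A(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, s], [-s, c]])    # exact exp(t*A), A = [[0, 1], [-1, 0]]

    def exp_B(t):
        return np.diag([np.exp(-0.1 * t), np.exp(-0.3 * t)])  # exact exp(t*B)

    def strang_step(u, dt):
        # exp((A+B) dt) ~= exp(A dt/2) exp(B dt) exp(A dt/2): O(dt^3) local error.
        return exp_A(dt / 2) @ (exp_B(dt) @ (exp_A(dt / 2) @ u))

    def evolve(u0, t, n):
        u = np.array(u0, dtype=float)
        dt = t / n
        for _ in range(n):
            u = strang_step(u, dt)
        return u
    ```

    Because each factor is an exact exponential, the only error is the splitting error, which is globally second order: halving the step size reduces the error by roughly a factor of four.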

  6. Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.

    2014-07-01

    Deep absorption lines with an extremely high velocity of ˜0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to address this is to constrain the physical parameters as a function of distance from the source. In order to study the spatial dependence of the parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in an AGN outflow. Our code implements radiative transfer in a 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the Fe XXV and Fe XXVI absorption features seen in the spectra. Broad Fe emission lines, which reflect the geometry and viewing angle, are also successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on the physical parameters. We discuss the launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.

  7. Probing electron acceleration and x-ray emission in laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thaury, C.; Ta Phuoc, K.; Corde, S.

    2013-06-15

    While laser-plasma accelerators have demonstrated a strong potential for the acceleration of electrons up to giga-electronvolt energies, few experimental tools for studying the acceleration physics have been developed. In this paper, we demonstrate a method for probing the acceleration process. A second laser beam, propagating perpendicular to the main beam, is focused on the gas jet a few nanoseconds before the main beam creates the accelerating plasma wave. This second beam is intense enough to ionize the gas and form a density depletion, which locally inhibits the acceleration. The position of the density depletion is scanned along the interaction length to probe the electron injection and acceleration, as well as the betatron X-ray emission. To illustrate the potential of the method, the variation of the injection position with plasma density is studied.

  8. Centripetal Acceleration: Often Forgotten or Misinterpreted

    ERIC Educational Resources Information Center

    Singh, Chandralekha

    2009-01-01

    Acceleration is a fundamental concept in physics which is taught in mechanics at all levels. Here, we discuss some challenges in teaching this concept effectively when the path along which the object is moving has a curvature and centripetal acceleration is present. We discuss examples illustrating that both physics teachers and students have…

  9. The history and future of accelerator radiological protection.

    PubMed

    Thomas, R H

    2001-01-01

    The development of accelerator radiological protection from the mid-1930s, just after the invention of the cyclotron, to the present day is described. Three major themes are developed: physics, personalities, and politics. The sections on physics describe the development of shielding design through measurement, radiation transport calculations, the impact of accelerators on the environment, and dosimetry in accelerator radiation fields. The discussion is limited to high-energy, high-intensity electron and proton accelerators. The impact of notable personalities on the development of both the basic science and the accelerator health physics profession itself is described, as is the important role played by scholars and teachers. The final section, which discusses the future of accelerator radiological protection, gives some emphasis to the social and political aspects that must be faced in the years ahead.

  10. Breaking the Code: The Creative Use of QR Codes to Market Extension Events

    ERIC Educational Resources Information Center

    Hill, Paul; Mills, Rebecca; Peterson, GaeLynn; Smith, Janet

    2013-01-01

    The use of smartphones has drastically increased in recent years, heralding an explosion in the use of QR codes. The black-and-white square barcodes that link the physical and digital worlds are everywhere. These simple codes can provide many opportunities to connect people in the physical world with many of Extension's online resources. The…

  11. Accelerator Technology Division annual report, FY 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-06-01

    This paper discusses: accelerator physics and special projects; experiments and injectors; magnetic optics and beam diagnostics; accelerator design and engineering; radio-frequency technology; accelerator theory and simulation; free-electron laser technology; accelerator controls and automation; and high power microwave sources and effects.

  12. The revised APTA code of ethics for the physical therapist and standards of ethical conduct for the physical therapist assistant: theory, purpose, process, and significance.

    PubMed

    Swisher, Laura Lee; Hiller, Peggy

    2010-05-01

    In June 2009, the House of Delegates (HOD) of the American Physical Therapy Association (APTA) passed a major revision of the APTA Code of Ethics for physical therapists and the Standards of Ethical Conduct for the Physical Therapist Assistant. The revised documents will be effective July 1, 2010. The purposes of this article are: (1) to provide a historical, professional, and theoretical context for this important revision; (2) to describe the 4-year revision process; (3) to examine major features of the documents; and (4) to discuss the significance of the revisions from the perspective of the maturation of physical therapy as a doctoring profession. PROCESS OF REVISION: The process for revision is delineated within the context of history and the Bylaws of APTA. FORMAT, STRUCTURE, AND CONTENT OF REVISED CORE ETHICS DOCUMENTS: The revised documents represent a significant change in format, level of detail, and scope of application. Previous APTA Codes of Ethics and Standards of Ethical Conduct for the Physical Therapist Assistant have delineated very broad general principles, with specific obligations spelled out in the Ethics and Judicial Committee's Guide for Professional Conduct and Guide for Conduct of the Physical Therapist Assistant. In contrast to the current documents, the revised documents address all 5 roles of the physical therapist, delineate ethical obligations in organizational and business contexts, and align with the tenets of Vision 2020. The significance of this revision is discussed within historical parameters, the implications for physical therapists and physical therapist assistants, the maturation of the profession, societal accountability and moral community, potential regulatory implications, and the inclusive and deliberative process of moral dialogue by which changes were developed, revised, and approved.

  13. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and, furthermore, to…

  14. High Gradient Accelerator Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temkin, Richard

    The goal of the MIT program of research on high gradient acceleration is the development of advanced acceleration concepts that lead to a practical and affordable next-generation linear collider at the TeV energy level. Other, more near-term applications include accelerators for materials processing; medicine; defense; mining; security; and inspection. The specific goals of the MIT program are: • Pioneering theoretical research on advanced structures for high gradient acceleration, including photonic structures and metamaterial structures; evaluation of the wakefields in these advanced structures • Experimental research to demonstrate the properties of advanced structures both in low-power microwave cold test and in high-power, high-gradient tests at megawatt power levels • Experimental research on microwave breakdown at high gradient, including studies of breakdown phenomena induced by RF electric fields and RF magnetic fields; development of new diagnostics of the breakdown process • Theoretical research on the physics and engineering features of RF vacuum breakdown • Maintaining and improving the Haimson / MIT 17 GHz accelerator, the highest frequency operational accelerator in the world, a unique facility for accelerator research • Providing the Haimson / MIT 17 GHz accelerator as a facility for outside users • Active participation in the US DOE High Gradient Collaboration, including joint work with SLAC and with Los Alamos National Laboratory; participation of MIT students in research at the national laboratories • Training the next generation of Ph.D. students in the field of accelerator physics.

  15. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla ), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳2563) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
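    As a scale model of the finite-volume machinery such codes are built on, here is a first-order 1D Euler update with a Rusanov (local Lax-Friedrichs) flux. Cholla itself uses far more accurate ingredients (unsplit CTU, PPM reconstruction, exact Riemann solvers) and runs on GPUs, so this is purely an illustrative sketch of the update pattern.

    ```python
    import numpy as np

    GAMMA = 1.4

    def primitives(U):
        # U rows: density, momentum, total energy (vectorized over cells).
        rho = U[0]
        v = U[1] / rho
        p = (GAMMA - 1.0) * (U[2] - 0.5 * rho * v**2)
        return rho, v, p

    def flux(U):
        rho, v, p = primitives(U)
        return np.array([rho * v, rho * v**2 + p, (U[2] + p) * v])

    def rusanov(UL, UR):
        # Local Lax-Friedrichs: a cheap, diffusive stand-in for an exact Riemann solver.
        rL, vL, pL = primitives(UL)
        rR, vR, pR = primitives(UR)
        smax = np.maximum(np.abs(vL) + np.sqrt(GAMMA * pL / rL),
                          np.abs(vR) + np.sqrt(GAMMA * pR / rR))
        return 0.5 * (flux(UL) + flux(UR)) - 0.5 * smax * (UR - UL)

    def evolve(U, dx, t_end, cfl=0.4):
        t = 0.0
        while t < t_end:
            rho, v, p = primitives(U)
            dt = min(cfl * dx / np.max(np.abs(v) + np.sqrt(GAMMA * p / rho)),
                     t_end - t)
            F = rusanov(U[:, :-1], U[:, 1:])                # interface fluxes
            U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])  # boundary cells held fixed
            t += dt
        return U
    ```

    On a GPU, the per-interface flux evaluation is the part that maps naturally onto thousands of threads, which is the parallelism Cholla exploits.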

  16. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, both experimental and theoretical studies are needed to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP codes for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses the JAERI Quantum Molecular Dynamics (JQMD) model and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. Future development of PHITS includes better parameterization of the JQMD model used for nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment in which a beam of 1 GeV/n Fe-ions passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle of up to 4 degrees. ©2005 Published by Elsevier Ltd on behalf of COSPAR.

  17. Accelerator-based Neutrino Physics at Fermilab

    NASA Astrophysics Data System (ADS)

    Dukes, Edmond

    2008-10-01

    The discovery of neutrino mass has excited great interest in elucidating the properties of neutrinos and their role in nature. Experiments around the world take advantage of solar, atmospheric, reactor, and accelerator sources of neutrinos. Accelerator-based sources are particularly convenient since their parameters can be tuned to optimize the measurement in question. At Fermilab an extensive neutrino program includes the MiniBooNE, SciBooNE, and MINOS experiments. Two major new experiments, MINERvA and NOvA, are being constructed, plans for a high-intensity neutrino source to DUSEL are underway, and an R&D effort towards a large liquid argon detector is being pursued. The NOvA experiment intends to search for electron neutrino appearance using a massive surface detector 811 km from Fermilab. In addition to measuring the last unknown mixing angle, theta(13), NOvA has the possibility of seeing matter-antimatter asymmetries in neutrinos and resolving the ordering of the neutrino mass states.

  18. Establishing the Common Community Physics Package by Transitioning the GFS Physics to a Collaborative Software Framework

    NASA Astrophysics Data System (ADS)

    Xue, L.; Firl, G.; Zhang, M.; Jimenez, P. A.; Gill, D.; Carson, L.; Bernardet, L.; Brown, T.; Dudhia, J.; Nance, L. B.; Stark, D. R.

    2017-12-01

    The Global Model Test Bed (GMTB) has been established to support the evolution of atmospheric physical parameterizations in NCEP global modeling applications. To accelerate the transition to the Next Generation Global Prediction System (NGGPS), a collaborative model development framework known as the Common Community Physics Package (CCPP) has been created within the GMTB to facilitate engagement from the broad community in physics experimentation and development. A key component of this Research to Operations (R2O) software framework is the Interoperable Physics Driver (IPD), which connects the physics parameterizations on one end to the dynamical cores on the other with minimal implementation effort. To initiate the CCPP, scientists and engineers from the GMTB separated and refactored the GFS physics. This exercise demonstrated the process of creating IPD-compliant code and can serve as an example for other physics schemes seeking inclusion in the CCPP. Further benefits of this process include run-time physics suite configuration and considerably reduced effort for testing modifications to physics suites through GMTB's physics test harness. The implementation will be described and preliminary results will be presented at the conference.
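    The driver concept (physics schemes hidden behind a single calling convention, with the suite assembled from run-time configuration) can be caricatured in a few lines. The scheme names, their toy tendencies, and the state layout here are all invented; the real IPD/CCPP interface is Fortran with a standardized metadata contract.

    ```python
    # Hypothetical toy schemes: each takes and returns the same state dict,
    # so the driver needs no scheme-specific knowledge.
    def radiation(state):
        state["T"] = [t - 0.1 for t in state["T"]]   # toy radiative cooling
        return state

    def boundary_layer(state):
        state["T"] = [t + 0.05 for t in state["T"]]  # toy turbulent heating
        return state

    SCHEMES = {"radiation": radiation, "pbl": boundary_layer}

    def run_suite(state, suite_config):
        # The suite is data, not code: reordering or swapping schemes needs
        # only a configuration change, which is the point of run-time suites.
        for name in suite_config:
            state = SCHEMES[name](state)
        return state
    ```

    Usage: `run_suite({"T": [300.0]}, ["radiation", "pbl"])` applies the two schemes in the configured order.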

  19. Measurement of Coriolis Acceleration with a Smartphone

    ERIC Educational Resources Information Center

    Shaku, Asif; Kraft, Jakob

    2016-01-01

    Undergraduate physics laboratories seldom have experiments that measure the Coriolis acceleration. This has traditionally been the case owing to the inherent complexities of making such measurements. Articles on the experimental determination of the Coriolis acceleration are few and far between in the physics literature. However, because modern…

  20. Accelerators Beyond The Tevatron?

    NASA Astrophysics Data System (ADS)

    Lach, Joseph

    2010-07-01

    Following the successful operation of the Fermilab superconducting accelerator, three new higher energy accelerators were planned: the UNK in the Soviet Union, the LHC in Europe, and the SSC in the United States. All were expected to start producing physics around 1995. They did not. Why?

  1. ACON: a multipurpose production controller for plasma physics codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, C.

    1983-01-01

    ACON is a BCON controller designed to run large production codes on the CTSS Cray-1 or the LTSS 7600 computers. ACON can also be operated interactively, with input from the user's terminal. The controller can run one code or a sequence of up to ten codes during the same job. Options are available to get and save Mass storage files, to perform Historian file updating operations, to compile and load source files, and to send out print and film files. Special features include the ability to retry after Mass storage failures, backup options for saving files, startup messages for the various codes, and the ability to reserve specified amounts of computer time after successive code runs. ACON's flexibility and power make it useful for running a number of different production codes.

  2. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized, fast-running proxies of large-scale scientific applications that use parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
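    The "replace computation by the passage of time" idea, including the speculative-spawning extension, can be sketched in miniature. The stage names and per-stage cost model below are invented for illustration and are far simpler than TADSim's actual parameterization; the structure (advance a simulated clock by sampled stage durations, then compare scheduling variants without implementing them) is the point.

    ```python
    import random

    # Hypothetical per-stage cost model: (mean seconds, relative jitter).
    STAGES = {"md_block": (4.0, 0.2), "check": (1.0, 0.1), "saddle_search": (3.0, 0.3)}

    def sample(rng, name):
        mean, jitter = STAGES[name]
        return mean * (1.0 + jitter * (rng.random() - 0.5))

    def run_sequential(rng, n_iter):
        # Baseline schedule: every stage advances the simulated clock in turn.
        clock = 0.0
        for _ in range(n_iter):
            for name in ("md_block", "check", "saddle_search"):
                clock += sample(rng, name)
        return clock

    def run_speculative(rng, n_iter):
        # Speculative spawning: the saddle search runs concurrently with the
        # MD block, so each iteration costs max(md, saddle) + check.
        clock = 0.0
        for _ in range(n_iter):
            md = sample(rng, "md_block")
            saddle = sample(rng, "saddle_search")
            clock += max(md, saddle) + sample(rng, "check")
        return clock
    ```

    Comparing the two predicted makespans gives an estimated speedup for the speculative variant before anyone writes the concurrent implementation, which is exactly the kind of what-if question an application simulator answers cheaply.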

  3. Detailed Modeling of Physical Processes in Electron Sources for Accelerator Applications

    NASA Astrophysics Data System (ADS)

    Chubenko, Oksana; Afanasev, Andrei

    2017-01-01

    At present, electron sources are essential in a wide range of applications, from common technical use to exploring the nature of matter. Depending on the application requirements, different methods and materials are used to generate electrons. State-of-the-art accelerator applications set a number of often-conflicting requirements for electron sources (e.g., quantum efficiency vs. polarization, current density vs. lifetime). Development of advanced electron sources includes modeling and design of cathodes, material growth, fabrication of cathodes, and cathode testing. Detailed simulation and modeling of the physical processes is required in order to shed light on the exact mechanisms of electron emission and to develop new-generation electron sources with optimized efficiency. The purpose of the present work is to study physical processes in advanced electron sources and to develop scientific tools that can be used to predict electron emission from novel nano-structured materials. In particular, the areas of interest include bulk and superlattice gallium arsenide (bulk/SL GaAs) photo-emitters and nitrogen-incorporated ultrananocrystalline diamond ((N)UNCD) photo/field-emitters. Work supported by The George Washington University and Euclid TechLabs LLC.

  4. Opportunities for Undergraduate Research in Nuclear Physics

    DOE PAGES

    Hicks, S. F.; Nguyen, T. D.; Jackson, D. T.; ...

    2017-10-26

    University of Dallas (UD) physics majors are offered a variety of undergraduate research opportunities in nuclear physics through an established program at the University of Kentucky Accelerator Laboratory (UKAL). The 7-MV Model CN Van de Graaff accelerator and the neutron production and detection facilities located there are used by UD students to investigate how neutrons scatter from materials that are important in nuclear energy production and for our basic understanding of how neutrons interact with matter. Recent student projects include modeling of the laboratory with the neutron transport code MCNP to investigate the effectiveness of laboratory shielding, testing the long-term gain stability of C6D6 liquid scintillation detectors, and deducing neutron elastic and inelastic scattering cross sections for 12C. Finally, results of these student projects are presented that indicate the pit below the scattering area reduces background by as much as 30%; the detectors show no significant gain instabilities; and new insights into existing 12C neutron inelastic scattering cross-section discrepancies near a neutron energy of 6.0 MeV are obtained.

  5. Opportunities for Undergraduate Research in Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, S. F.; Nguyen, T. D.; Jackson, D. T.

    University of Dallas (UD) physics majors are offered a variety of undergraduate research opportunities in nuclear physics through an established program at the University of Kentucky Accelerator Laboratory (UKAL). The 7-MV Model CN Van de Graaff accelerator and the neutron production and detection facilities located there are used by UD students to investigate how neutrons scatter from materials that are important in nuclear energy production and for our basic understanding of how neutrons interact with matter. Recent student projects include modeling of the laboratory with the neutron transport code MCNP to investigate the effectiveness of laboratory shielding, testing the long-term gain stability of C6D6 liquid scintillation detectors, and deducing neutron elastic and inelastic scattering cross sections for 12C. Finally, results of these student projects are presented that indicate the pit below the scattering area reduces background by as much as 30%; the detectors show no significant gain instabilities; and new insights into existing 12C neutron inelastic scattering cross-section discrepancies near a neutron energy of 6.0 MeV are obtained.

  6. Overview of Particle and Heavy Ion Transport Code System PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

    A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran and can be executed on almost all computers. All components of PHITS, such as its source, executable, and data-library files, are assembled in one package and distributed to many countries via the Research Organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  7. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  8. HEPLIB 91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  9. Conceptual design project: Accelerator complex for nuclear physics studies and boron neutron capture therapy application at the Yerevan Physics Institute (YerPhI) Yerevan, Armenia

    NASA Astrophysics Data System (ADS)

    Avagyan, R. H.; Kerobyan, I. A.

    2015-07-01

    The final goal of the proposed project is the creation of a Complex of Accelerator Facilities at the Yerevan Physics Institute (CAF YerPhI) for basic nuclear physics research as well as for applied programs, including boron neutron capture therapy (BNCT). The CAF will include the following facilities: Cyclotron C70, a heavy-material (uranium) target/ion source, a mass separator, LINAC1 (0.15-1.5 MeV/u) and LINAC2 (1.5-10 MeV/u). Proton beams delivered by the C70 at an energy of 70 MeV will be used for investigations in basic nuclear physics, and beams at an energy of 30 MeV for applications.

  10. Electron linear accelerator system for natural rubber vulcanization

    NASA Astrophysics Data System (ADS)

    Rimjaem, S.; Kongmon, E.; Rhodes, M. W.; Saisut, J.; Thongbai, C.

    2017-09-01

    Development of an electron accelerator system, beam diagnostic instruments, an irradiation apparatus and an electron beam processing methodology for natural rubber vulcanization is underway at the Plasma and Beam Physics Research Facility, Chiang Mai University, Thailand. The project aims to improve the quality of natural rubber products. The system consists of a DC thermionic electron gun, a 5-cell standing-wave radio-frequency (RF) linear accelerator (linac) with side-coupling cavities and an electron beam irradiation apparatus. This system is used to produce electron beams with an adjustable energy between 0.5 and 4 MeV and a pulse current of 10-100 mA at a pulse repetition rate of 20-400 Hz. An average absorbed dose between 160 and 640 Gy is expected to be achieved for a 4 MeV electron beam when the accelerator is operated at 400 Hz. The research activities focus first on assembly of the accelerator system, studies of accelerator properties and electron beam dynamics simulations. The resonant frequency of the RF linac in the π/2 operating mode is 2996.82 MHz at an operating temperature of 35 °C. The beam dynamics simulations were conducted using the code ASTRA. Simulation results suggest that electron beams with an average energy of 4.002 MeV can be obtained when the linac accelerating gradient is 41.7 MV/m. The rms transverse beam size and normalized rms transverse emittance at the linac exit are 0.91 mm and 10.48 π mm·mrad, respectively. This information can then be used as input for Monte Carlo simulations to estimate the electron beam penetration depth and dose distribution in the natural rubber latex. The results of this research will be used to define optimal conditions for natural rubber vulcanization with different electron beam energies and doses, which is very useful for the development of future industrial accelerator units.
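
    As a quick consistency check on the quoted beam energies, the relativistic kinematics of the 0.5-4 MeV electrons can be sketched in a few lines (standard constants only; this is an illustrative calculation, not code from the project):

```python
import math

ELECTRON_REST_MEV = 0.511  # electron rest energy in MeV (standard value)

def lorentz_factor(kinetic_mev):
    """Lorentz factor gamma = 1 + T / (m_e c^2) for kinetic energy T."""
    return 1.0 + kinetic_mev / ELECTRON_REST_MEV

def beta(kinetic_mev):
    """Electron speed as a fraction of c: beta = sqrt(1 - 1/gamma^2)."""
    g = lorentz_factor(kinetic_mev)
    return math.sqrt(1.0 - 1.0 / g**2)

for ke in (0.5, 4.0):  # the adjustable energy range quoted for the linac
    print(f"{ke} MeV: gamma = {lorentz_factor(ke):.3f}, v/c = {beta(ke):.4f}")
```

    At 4 MeV the electrons are already highly relativistic (v/c above 0.99), which is why the quoted average beam energy barely changes along the last cells of the structure.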

  11. EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.

    2014-04-01

    The European Activation SYstem has been re-engineered and re-written in modern programming languages to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization, and life and storage cycle management. In parallel, the maturity of modern, truly general-purpose libraries encompassing thousands of target isotopes, such as TENDL-2012, the evolution of the ENDF-6 format, and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production, and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ) are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II, is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, and already benefits from feedback from the extensive validation and verification activities performed with its predecessor.
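
    The activation, transmutation and decay problems such inventory codes solve reduce, in the simplest case, to Bateman decay chains. A minimal sketch of the analytic two-member solution is given below (a pedagogical illustration only; FISPACT-II solves far larger coupled systems numerically with thousands of nuclides):

```python
import math

def bateman_two_step(n0, lam1, lam2, t):
    """Analytic Bateman solution for a two-member chain A -> B -> (stable),
    with decay constants lam1 != lam2, N_A(0) = n0 and N_B(0) = 0."""
    n_a = n0 * math.exp(-lam1 * t)
    n_b = n0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n_a, n_b

# example: parent decays faster than daughter (arbitrary illustrative constants)
n_a, n_b = bateman_two_step(1.0, 0.2, 0.05, 5.0)
print(n_a, n_b)
```

    The daughter population first grows as the parent decays, then itself decays away; total nuclei are conserved once the stable end product is included.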

  12. Acceleration of Semiempirical QM/MM Methods through Message Passage Interface (MPI), Hybrid MPI/Open Multiprocessing, and Self-Consistent Field Accelerator Implementations.

    PubMed

    Ojeda-May, Pedro; Nam, Kwangho

    2017-08-08

    The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passage interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model, in which both task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version was at least 4-fold faster than existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods. The use of SCF convergence accelerators further accelerated the code by ∼12-35%, depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communication between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach, which displayed better scalability for larger numbers of CPU cores (up to 64 CPUs in the tested systems).
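
    The SCF convergence accelerators described above extrapolate successive density and Fock matrices toward the self-consistent fixed point. The idea can be illustrated on a scalar fixed-point problem with Aitken's Δ² (Steffensen) extrapolation; this is a pedagogical stand-in for the concept, not the CHARMM implementation:

```python
import math

def fixed_point_plain(g, x0, tol=1e-10, max_iter=1000):
    """Plain successive substitution (analogous to unaccelerated SCF cycles)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

def fixed_point_aitken(g, x0, tol=1e-10, max_iter=1000):
    """Steffensen iteration: Aitken delta-squared extrapolation of two trial steps."""
    x = x0
    for n in range(1, max_iter + 1):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:   # already converged: extrapolation is exact
            return x2, n
        x_acc = x - (x1 - x) ** 2 / denom
        if abs(x_acc - x) < tol:
            return x_acc, n
        x = x_acc
    return x, max_iter

# the accelerated iteration reaches the fixed point of x = cos(x) in far fewer cycles
print(fixed_point_plain(math.cos, 1.0), fixed_point_aitken(math.cos, 1.0))
```

    The same principle, applied matrix-wise to the density or Fock matrix, is what cuts the number of SCF cycles (and hence diagonalizations) per energy evaluation.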

  13. Beam acceleration through proton radio frequency quadrupole accelerator in BARC

    NASA Astrophysics Data System (ADS)

    Bhagwat, P. V.; Krishnagopal, S.; Mathew, J. V.; Singh, S. K.; Jain, P.; Rao, S. V. L. S.; Pande, M.; Kumar, R.; Roychowdhury, P.; Kelwani, H.; Rama Rao, B. V.; Gupta, S. K.; Agarwal, A.; Kukreti, B. M.; Singh, P.

    2016-05-01

    A 3 MeV proton Radio Frequency Quadrupole (RFQ) accelerator has been designed at the Bhabha Atomic Research Centre, Mumbai, India, for the Low Energy High Intensity Proton Accelerator (LEHIPA) programme. The 352 MHz RFQ is built in 4 segments and in the first phase two segments of the LEHIPA RFQ were commissioned, accelerating a 50 keV, 1 mA pulsed proton beam from the ion source, to an energy of 1.24 MeV. The successful operation of the RFQ gave confidence in the physics understanding and technology development that have been achieved, and indicate that the road forward can now be traversed rather more quickly.

  14. CFD Code Survey for Thrust Chamber Application

    NASA Technical Reports Server (NTRS)

    Gross, Klaus W.

    1990-01-01

    In the quest to find analytical reference codes, responses to a questionnaire are presented which portray the current computational fluid dynamics (CFD) program status and capability at various organizations for characterizing liquid rocket thrust chamber flow fields. Sample cases are identified to examine the ability, operational conditions, and accuracy of the codes. To select the programs best suited for accelerated improvement, evaluation criteria are proposed.

  15. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feng, Bing

    Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computation. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement with the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models, the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied, with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed when the bunch population is increased by 5 times for the upgrade. But the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac
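
    The pipelining idea, in which different processor groups work concurrently on different stages of successive beam slices, can be sketched with a toy two-stage pipeline built from threads and a queue (an illustrative stand-in with made-up stage functions; QuickPIC's MPI implementation is far more involved):

```python
import queue
import threading

def stage_a(x):
    """Toy stand-in for the first pipeline stage (e.g. a 2-D field solve on a slice)."""
    return x * 2

def stage_b(x):
    """Toy stand-in for the second pipeline stage (e.g. the particle push on a slice)."""
    return x + 1

def run_pipeline(slices):
    """Two worker groups process successive slices concurrently via a shared queue."""
    q_ab = queue.Queue()
    results = [None] * len(slices)

    def worker_a():
        for i, s in enumerate(slices):
            q_ab.put((i, stage_a(s)))
        q_ab.put(None)  # sentinel: no more slices

    def worker_b():
        while True:
            item = q_ab.get()
            if item is None:
                break
            i, val = item
            results[i] = stage_b(val)

    ta = threading.Thread(target=worker_a)
    tb = threading.Thread(target=worker_b)
    ta.start(); tb.start(); ta.join(); tb.join()
    return results

print(run_pipeline([1, 2, 3]))  # same answer as a sequential sweep over the slices
```

    Once slices flow through the pipeline, stage A of slice n+1 overlaps in time with stage B of slice n, which is the source of the extra factor of parallelism described in the abstract.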

  16. Direct Laser Acceleration in Laser Wakefield Accelerators

    NASA Astrophysics Data System (ADS)

    Shaw, J. L.; Froula, D. H.; Marsh, K. A.; Joshi, C.; Lemos, N.

    2017-10-01

    The direct laser acceleration (DLA) of electrons in a laser wakefield accelerator (LWFA) has been investigated. We show that when there is a significant overlap between the drive laser and the trapped electrons in a LWFA cavity, the accelerating electrons can gain energy from the DLA mechanism in addition to LWFA. The properties of the electron beams produced in a LWFA, where the electrons are injected by ionization injection, have been investigated using particle-in-cell (PIC) code simulations. Particle tracking was used to demonstrate the presence of DLA in LWFA. Further PIC simulations comparing LWFA with and without DLA show that the presence of DLA can lead to electron beams that have maximum energies that exceed the estimates given by the theory for the ideal blowout regime. The magnitude of the contribution of DLA to the energy gained by the electron was found to be on the order of the LWFA contribution. The presence of DLA in a LWFA can also lead to enhanced betatron oscillation amplitudes and increased divergence in the direction of the laser polarization. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  17. ICPP: Relativistic Plasma Physics with Ultra-Short High-Intensity Laser Pulses

    NASA Astrophysics Data System (ADS)

    Meyer-Ter-Vehn, Juergen

    2000-10-01

    Recent progress in generating ultra-short high-intensity laser pulses has opened a new branch of relativistic plasma physics, which is discussed in this talk in terms of particle-in-cell (PIC) simulations. These pulses create small volumes of high-density plasma with plasma fields above 10^12 V/m and 10^8 Gauss. At intensities beyond 10^18 W/cm^2, now available from table-top systems, they drive relativistic electron currents in self-focussing plasma channels. These currents are close to the Alfven limit and allow the study of relativistic current filamentation. A most remarkable feature is the generation of well-collimated relativistic electron beams emerging from the channels with energies up to GeV. In dense matter they trigger cascades of gamma-rays, e^+e^- pairs, and a host of nuclear and particle processes. One of the applications may be fast ignition of compressed inertial fusion targets. Above 10^23 W/cm^2, expected to be achieved in the future, solid-density matter becomes relativistically transparent for optical light, and the acceleration of protons to multi-GeV energies is predicted in plasma layers less than 1 mm thick. These results open completely new perspectives for plasma-based accelerator schemes. Three-dimensional PIC simulations turn out to be the superior tool for exploring the relativistic plasma kinetics at such intensities. Results obtained with the VLPL code [1] are presented. Different mechanisms of particle acceleration are discussed. Both laser wakefield acceleration and direct laser acceleration in plasma channels (by a mechanism similar to inverse free-electron lasers) have been identified. The latter describes recent MPQ experimental results. [1] A. Pukhov, J. Plasma Physics 61, 425-433 (1999): Three-dimensional electromagnetic relativistic particle-in-cell code VLPL (Virtual Laser Plasma Laboratory).

  18. Empirical evidence for site coefficients in building code provisions

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data, together with recent site characterizations based on shear-wave velocity measurements, provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  19. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    NASA Astrophysics Data System (ADS)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001% when comparing the output of the accelerated kernel to the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
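
    The Rosenbrock methods mentioned above are linearly implicit schemes suited to stiff chemical kinetics: each step solves a linear system built from the Jacobian instead of iterating a nonlinear solve. A minimal one-stage sketch on a single stiff decay reaction illustrates the structure (linearized implicit Euler; the production KPP solver is a three-stage scheme over full reaction networks):

```python
import math

def ros1_step(y, h, k):
    """One linearly implicit (Rosenbrock-type) step for y' = -k*y:
    solve (1 - h*J) * dy = h * f(y), with Jacobian J = df/dy = -k."""
    f = -k * y
    jac = -k
    dy = h * f / (1.0 - h * jac)
    return y + dy

def integrate(y0, k, h, n_steps):
    y = y0
    for _ in range(n_steps):
        y = ros1_step(y, h, k)
    return y

# stiff decay with rate k = 100: stable even at step sizes where explicit Euler diverges
y_num = integrate(1.0, 100.0, 1e-4, 100)   # integrate to t = 0.01
print(y_num, math.exp(-1.0))
```

    Because the update solves against (1 - h*J), the scheme remains stable for arbitrarily stiff rates, which is the property that makes this solver family attractive for atmospheric chemistry.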

  20. Electron Accelerators for Research at the Frontiers of Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartline, Beverly; Grunder, Hermann

    1986-10-01

    Electron accelerators for the frontiers of nuclear physics must provide high duty factor (≥ 80%) for coincidence measurements; few-hundred-MeV through few-GeV energy for work in the nucleonic, hadronic, and confinement regimes; energy resolution of ~10^-4; and high current (≥ 100 μA). To fulfill these requirements new machines and upgrades of existing ones are being planned or constructed. Representative microtron-based facilities are the upgrade of MAMI at the University of Mainz (West Germany), the proposed two-stage cascade microtron at the University of Illinois (U.S.A.), and the three-stage Troitsk ``polytron'' (USSR). Representative projects to add pulse stretcher rings to existing linacs are the upgrades at MIT-Bates (U.S.A.) and at NIKHEF-K (Netherlands). Recent advances in superconducting rf technology, especially in cavity design and fabrication, have made large superconducting cw linacs feasible. Recirculating superconducting cw linacs are under construction.

  1. 1985 Particle Accelerator Conference: Accelerator Engineering and Technology, 11th, Vancouver, Canada, May 13-16, 1985, Proceedings

    NASA Astrophysics Data System (ADS)

    Strathdee, A.

    1985-10-01

    The topics discussed are related to high-energy accelerators and colliders, particle sources and electrostatic accelerators, controls, instrumentation and feedback, beam dynamics, low- and intermediate-energy circular accelerators and rings, RF and other acceleration systems, beam injection, extraction and transport, operations and safety, linear accelerators, applications of accelerators, radiation sources, superconducting supercolliders, new acceleration techniques, superconducting components, cryogenics, and vacuum. Accelerator and storage ring control systems are considered along with linear and nonlinear orbit theory, transverse and longitudinal instabilities and cures, beam cooling, injection and extraction orbit theory, high current dynamics, general beam dynamics, and medical and radioisotope applications. Attention is given to superconducting RF structures, magnet technology, superconducting magnets, and physics opportunities with relativistic heavy ion accelerators.

  2. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300

  3. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.
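
    The Metropolis Monte Carlo search at the heart of such docking engines can be sketched on a toy one-dimensional "scoring function" (illustrative only; GeauxDock's move set and physics-based scoring function are far richer, and the function below is made up for the example):

```python
import math
import random

def score(x):
    """Toy 1-D scoring function standing in for a docking energy landscape."""
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x)

def metropolis(x0, beta=5.0, step=0.5, n_iter=2000, seed=1):
    """Metropolis Monte Carlo: accept downhill moves always, uphill moves
    with probability exp(-beta * dE); track the best pose seen."""
    rng = random.Random(seed)
    x, e = x0, score(x0)
    best_x, best_e = x, e
    for _ in range(n_iter):
        x_trial = x + rng.uniform(-step, step)
        e_trial = score(x_trial)
        if e_trial < e or rng.random() < math.exp(-beta * (e_trial - e)):
            x, e = x_trial, e_trial
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

print(metropolis(10.0))  # walks from a poor starting pose toward the low-energy basin
```

    The occasional acceptance of uphill moves is what lets the sampler escape local minima of the rugged scoring landscape, the same reason docking engines prefer Monte Carlo over pure minimization.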

  4. Illinois Accelerator Research Center

    DOE PAGES

    Kroc, Thomas K.; Cooper, Charlie A.

    2017-10-26

    The Illinois Accelerator Research Center (IARC) hosts a new accelerator development program at Fermi National Accelerator Laboratory. IARC provides access to Fermi's state-of-the-art facilities and technologies for research, development and industrialization of particle accelerator technology. In addition to facilitating access to available existing Fermi infrastructure, the IARC Campus has a dedicated 36,000 ft2 heavy assembly building (HAB) with all the infrastructure needed to develop, commission and operate new accelerators. Connected to the HAB is a 47,000 ft2 Office, Technology and Engineering (OTE) building, paid for by the state, that has office, meeting, and light technical space. The OTE building, which contains the Accelerator Physics Center, and nearby Accelerator and Technical divisions provide IARC collaborators with unique access to world class expertise in a wide array of accelerator technologies. Finally, at IARC scientists and engineers from Fermilab and academia work side by side with industrial partners to develop breakthroughs in accelerator science and translate them into applications for the nation's health, wealth and security.

  5. Illinois Accelerator Research Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroc, Thomas K.; Cooper, Charlie A.

    The Illinois Accelerator Research Center (IARC) hosts a new accelerator development program at Fermi National Accelerator Laboratory. IARC provides access to Fermi's state-of-the-art facilities and technologies for research, development and industrialization of particle accelerator technology. In addition to facilitating access to available existing Fermi infrastructure, the IARC Campus has a dedicated 36,000 ft2 heavy assembly building (HAB) with all the infrastructure needed to develop, commission and operate new accelerators. Connected to the HAB is a 47,000 ft2 Office, Technology and Engineering (OTE) building, paid for by the state, that has office, meeting, and light technical space. The OTE building, which contains the Accelerator Physics Center, and nearby Accelerator and Technical divisions provide IARC collaborators with unique access to world class expertise in a wide array of accelerator technologies. Finally, at IARC scientists and engineers from Fermilab and academia work side by side with industrial partners to develop breakthroughs in accelerator science and translate them into applications for the nation's health, wealth and security.

  6. Illinois Accelerator Research Center

    NASA Astrophysics Data System (ADS)

    Kroc, Thomas K.; Cooper, Charlie A.

    The Illinois Accelerator Research Center (IARC) hosts a new accelerator development program at Fermi National Accelerator Laboratory. IARC provides access to Fermi's state-of-the-art facilities and technologies for research, development and industrialization of particle accelerator technology. In addition to facilitating access to available existing Fermi infrastructure, the IARC Campus has a dedicated 36,000 ft2 Heavy Assembly Building (HAB) with all the infrastructure needed to develop, commission and operate new accelerators. Connected to the HAB is a 47,000 ft2 Office, Technology and Engineering (OTE) building, paid for by the state, that has office, meeting, and light technical space. The OTE building, which contains the Accelerator Physics Center, and nearby Accelerator and Technical divisions provide IARC collaborators with unique access to world class expertise in a wide array of accelerator technologies. At IARC scientists and engineers from Fermilab and academia work side by side with industrial partners to develop breakthroughs in accelerator science and translate them into applications for the nation's health, wealth and security.

  7. Stokes versus Basset: comparison of forces governing motion of small bodies with high acceleration

    NASA Astrophysics Data System (ADS)

    Krafcik, A.; Babinec, P.; Frollo, I.

    2018-05-01

    In this paper, the importance of the forces governing the motion of a millimetre-sized sphere in a viscous fluid has been examined. As has been shown previously, for spheres moving with a high initial acceleration, the Basset history force should be used as well as the commonly used Stokes force. This paper introduces the concept of history forces, which are almost unknown to students despite their interesting mathematical structure and physical meaning, and shows the implementation of simple and efficient numerical methods as a MATLAB code to simulate the motion of a falling sphere. An important application of this code could be, for example, the simulation of microfluidic systems, where the external forces are very large and the relevant timescale is on the order of milliseconds to seconds, so that the Basset history force cannot be neglected.
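
    The Stokes-only part of the problem is straightforward to integrate numerically. A minimal sketch for a sphere falling under gravity, buoyancy and Stokes drag is shown below, with assumed toy parameters (the Basset history term, which is the paper's actual focus, is deliberately omitted here; the paper uses MATLAB, this sketch uses Python):

```python
import math

# assumed toy parameters (not from the paper): a 1 mm diameter dense sphere in water
RHO_S, RHO_F = 7800.0, 1000.0   # sphere and fluid densities, kg/m^3
R = 0.5e-3                      # sphere radius, m
MU = 1.0e-3                     # dynamic viscosity of water, Pa*s
G = 9.81                        # m/s^2

volume = (4.0 / 3.0) * math.pi * R**3
m = RHO_S * volume
drag_coef = 6.0 * math.pi * MU * R              # Stokes drag: F = -6*pi*mu*R*v
buoyant_weight = (RHO_S - RHO_F) * volume * G   # gravity minus buoyancy

def fall(v0=0.0, dt=1e-4, t_end=2.0):
    """Explicit Euler integration of m dv/dt = (rho_s - rho_f) V g - 6 pi mu R v."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * (buoyant_weight - drag_coef * v) / m
    return v

v_terminal = buoyant_weight / drag_coef
print(fall(), v_terminal)   # the numerical solution approaches terminal velocity
```

    The velocity relaxes toward terminal velocity on the timescale m / (6πμR); it is during this rapid initial acceleration that the neglected Basset history force matters most.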

  8. Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code

    NASA Astrophysics Data System (ADS)

    Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.

    2015-08-01

    MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.

  9. Measurement of Coriolis Acceleration with a Smartphone

    NASA Astrophysics Data System (ADS)

    Shakur, Asif; Kraft, Jakob

    2016-05-01

    Undergraduate physics laboratories seldom have experiments that measure the Coriolis acceleration. This has traditionally been the case owing to the inherent complexities of making such measurements. Articles on the experimental determination of the Coriolis acceleration are few and far between in the physics literature. However, because modern smartphones come with a raft of built-in sensors, we have a unique opportunity to experimentally determine the Coriolis acceleration conveniently in a pedagogically enlightening environment at modest cost by using student-owned smartphones. Here we employ the gyroscope and accelerometer in a smartphone to verify the dependence of Coriolis acceleration on the angular velocity of a rotating track and the speed of the sliding smartphone.
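
    The relation being verified is the standard Coriolis magnitude a = 2ωv for motion perpendicular to the rotation axis. A trivial sketch, with assumed sample values in place of real sensor readings:

```python
def coriolis_acceleration(omega, v):
    """Magnitude of the Coriolis acceleration a = 2*omega*v for a velocity
    perpendicular to the rotation axis (omega in rad/s, v in m/s)."""
    return 2.0 * omega * v

# e.g. a phone sliding at 0.3 m/s on a track rotating at 2 rad/s (assumed values)
print(coriolis_acceleration(2.0, 0.3), "m/s^2")
```

    In the experiment, omega would come from the phone's gyroscope and v from integrating or timing the slide, with the accelerometer reading compared against this prediction.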

  10. AMBER: a PIC slice code for DARHT

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Fawley, William

    1999-11-01

    The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice Particle-In-Cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self, as well as external, electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and some preliminary results obtained when studying emittance growth during the beam propagation.

  11. Muon Sources for Particle Physics - Accomplishments of the Muon Accelerator Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neuffer, D.; Stratakis, D.; Palmer, M.

    The Muon Accelerator Program (MAP) completed a four-year study on the feasibility of muon colliders and on using stored muon beams for neutrinos. That study was broadly successful in its goals, establishing the feasibility of lepton colliders from the 125 GeV Higgs Factory to more than 10 TeV, as well as exploring the use of a μ storage ring (MSR) for neutrinos, and establishing that MSRs could provide factory-level intensities of νe (ν̄e) and νμ (ν̄μ) beams. The key components of the collider and neutrino factory systems were identified. Feasible designs and detailed simulations of all of these components were obtained, including some initial hardware component tests, setting the stage for future implementation where resources are available and clearly associated physics goals become apparent.

  12. FERRET data analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
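
    The combination FERRET performs can be sketched as a generalized least-squares step (a minimal illustration under an assumed linear model y = A x with measurement covariance C; this is not the actual FERRET implementation):

```python
import numpy as np

def gls_fit(A, y, C):
    """Generalized least squares: estimate x and its covariance."""
    Cinv = np.linalg.inv(C)
    cov = np.linalg.inv(A.T @ Cinv @ A)   # propagated uncertainty of x
    x_hat = cov @ A.T @ Cinv @ y          # best estimate
    return x_hat, cov

# two measurements of the same quantity with sigma = 1 and sigma = 2
A = np.array([[1.0], [1.0]])
y = np.array([10.0, 14.0])
C = np.diag([1.0, 4.0])
x_hat, cov = gls_fit(A, y, C)  # weighted mean 10.8, variance 0.8
```

    Correlated measurements are handled by putting off-diagonal terms in C, which is the proper treatment of correlations the abstract emphasizes.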

  13. The High-Luminosity upgrade of the LHC: Physics and Technology Challenges for the Accelerator and the Experiments

    NASA Astrophysics Data System (ADS)

    Schmidt, Burkhard

    2016-04-01

    In the second phase of the LHC physics program, the accelerator will provide an additional integrated luminosity of about 2500/fb over 10 years of operation to the general purpose detectors ATLAS and CMS. This will substantially enlarge the mass reach in the search for new particles and will also greatly extend the potential to study the properties of the Higgs boson discovered at the LHC in 2012. In order to meet the experimental challenges of unprecedented pp luminosity, the experiments will need to address the aging of the present detectors and to improve the ability to isolate and precisely measure the products of the most interesting collisions. The lectures gave an overview of the physics motivation and described the conceptual designs and the expected performance of the upgrades of the four major experiments, ALICE, ATLAS, CMS and LHCb, along with the plans to develop the appropriate experimental techniques and a brief overview of the accelerator upgrade. Only some key points of the upgrade program of the four major experiments are discussed in this report; more information can be found in the references given at the end.

  14. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  15. Computer modeling of test particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Decker, Robert B.

    1988-01-01

    The present evaluation of the basic techniques and illustrative results of charged particle-modeling numerical codes suitable for particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.

  16. Kinetic Modeling of Radiative Turbulence in Relativistic Astrophysical Plasmas: Particle Acceleration and High-Energy Flares

    NASA Astrophysics Data System (ADS)

    Uzdensky, Dmitri

    Relativistic astrophysical plasma environments routinely produce intense high-energy emission, which is often observed to be nonthermal and rapidly flaring. The recently discovered gamma-ray (> 100 MeV) flares in Crab Pulsar Wind Nebula (PWN) provide a quintessential illustration of this, but other notable examples include relativistic active galactic nuclei (AGN) jets, including blazars, and Gamma-ray Bursts (GRBs). Understanding the processes responsible for the very efficient and rapid relativistic particle acceleration and subsequent emission that occurs in these sources poses a strong challenge to modern high-energy astrophysics, especially in light of the necessity to overcome radiation reaction during the acceleration process. Magnetic reconnection and collisionless shocks have been invoked as possible mechanisms. However, the inferred extreme particle acceleration requires the presence of coherent electric-field structures. How such large-scale accelerating structures (such as reconnecting current sheets) can spontaneously arise in turbulent astrophysical environments still remains a mystery. The proposed project will conduct a first-principles computational and theoretical study of kinetic turbulence in relativistic collisionless plasmas with a special focus on nonthermal particle acceleration and radiation emission. The main computational tool employed in this study will be the relativistic radiative particle-in-cell (PIC) code Zeltron, developed by the team members at the Univ. of Colorado. This code has a unique capability to self-consistently include the synchrotron and inverse-Compton radiation reaction force on the relativistic particles, while simultaneously computing the resulting observable radiative signatures. This proposal envisions performing massively parallel, large-scale three-dimensional simulations of driven and decaying kinetic turbulence in physical regimes relevant to real astrophysical systems (such as the Crab PWN), including the

  17. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, another part of this research focused on designing specific hardware based on the reconfigurable computing technique in order to accelerate AGENT computations. This is the first time an application of this type has been used in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on that analysis. Through parallel computation on the specially designed, highly efficient architecture, the acceleration design on FPGA achieves high performance at a much lower working frequency than CPUs. The design simulations show that the acceleration design would be able to speed up large-scale AGENT computations about 20 times. The high performance AGENT acceleration system will drastically shorten the
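
    The roughly 20× figure can be related to the fraction of runtime that is accelerated via Amdahl's law (the numbers below are illustrative assumptions, not measurements from this work):

```python
def amdahl_speedup(f: float, s: float) -> float:
    """Overall speedup when fraction f of the work is accelerated s-fold."""
    return 1.0 / ((1.0 - f) + f / s)

# e.g. if ~96% of the runtime is offloaded to hardware 100x faster,
# the overall speedup is about 20x
overall = amdahl_speedup(0.96, 100.0)  # ~20.2
```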

  18. An Experimental Study of a Pulsed Electromagnetic Plasma Accelerator

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Eskridge, Richard; Lee, Mike; Smith, James; Martin, Adam; Markusic, Tom E.; Cassibry, Jason T.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Experiments are being performed on the NASA Marshall Space Flight Center (MSFC) pulsed electromagnetic plasma accelerator (PEPA-0). Data produced from the experiments provide an opportunity to further understand the plasma dynamics in these thrusters via detailed computational modeling. The detailed and accurate understanding of the plasma dynamics in these devices holds the key towards extending their capabilities in a number of applications, including their applications as high power (greater than 1 MW) thrusters, and their use for producing high-velocity, uniform plasma jets for experimental purposes. For this study, the 2-D MHD modeling code, MACH2, is used to provide detailed interpretation of the experimental data. At the same time, a 0-D physics model of the plasma initial phase is developed to guide our 2-D modeling studies.

  19. High field gradient particle accelerator

    DOEpatents

    Nation, J.A.; Greenwald, S.

    1989-05-30

    A high electric field gradient electron accelerator utilizing short duration, microwave radiation, and capable of operating at high field gradients for high energy physics applications or at reduced electric field gradients for high average current intermediate energy accelerator applications is disclosed. Particles are accelerated in a smooth bore, periodic undulating waveguide, wherein the period is so selected that the particles slip an integral number of cycles of the r.f. wave every period of the structure. This phase step of the particles produces substantially continuous acceleration in a traveling wave without transverse magnetic or other guide means for the particle. 10 figs.

  20. High field gradient particle accelerator

    DOEpatents

    Nation, John A.; Greenwald, Shlomo

    1989-01-01

    A high electric field gradient electron accelerator utilizing short duration, microwave radiation, and capable of operating at high field gradients for high energy physics applications or at reduced electric field gradients for high average current intermediate energy accelerator applications. Particles are accelerated in a smooth bore, periodic undulating waveguide, wherein the period is so selected that the particles slip an integral number of cycles of the r.f. wave every period of the structure. This phase step of the particles produces substantially continuous acceleration in a traveling wave without transverse magnetic or other guide means for the particle.

  1. Observation of acceleration and deceleration in gigaelectron-volt-per-metre gradient dielectric wakefield accelerators

    DOE PAGES

    O’Shea, B. D.; Andonian, G.; Barber, S. K.; ...

    2016-09-14

    There is an urgent need to develop new acceleration techniques capable of exceeding gigaelectron-volt-per-metre (GeV m –1) gradients in order to enable future generations of both light sources and high-energy physics experiments. To address this need, short wavelength accelerators based on wakefields, where an intense relativistic electron beam radiates the demanded fields directly into the accelerator structure or medium, are currently under intense investigation. One such wakefield-based accelerator, the dielectric wakefield accelerator, uses a dielectric-lined waveguide to support a wakefield used for acceleration. Here we show gradients of 1.347±0.020 GeV m –1 using a dielectric wakefield accelerator of 15 cm length, with sub-millimetre transverse aperture, by measuring changes of the kinetic state of relativistic electron beams. We follow this measurement by demonstrating accelerating gradients of 320±17 MeV m –1. As a result, both measurements improve on previous measurements by an order of magnitude and show promise for dielectric wakefield accelerators as sources of high-energy electrons.
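
    As a quick consistency check on the figures above (illustrative only; the helper name is ours), the energy gained is simply the gradient times the structure length:

```python
def energy_gain_mev(gradient_mev_per_m: float, length_m: float) -> float:
    """Energy gain over an accelerating structure of given length."""
    return gradient_mev_per_m * length_m

# 1.347 GeV/m sustained over a 15 cm dielectric structure
gain = energy_gain_mev(1347.0, 0.15)  # about 202 MeV
```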

  2. Observation of acceleration and deceleration in gigaelectron-volt-per-metre gradient dielectric wakefield accelerators

    PubMed Central

    O'Shea, B. D.; Andonian, G.; Barber, S. K.; Fitzmorris, K. L.; Hakimi, S.; Harrison, J.; Hoang, P. D.; Hogan, M. J.; Naranjo, B.; Williams, O. B.; Yakimenko, V.; Rosenzweig, J. B.

    2016-01-01

    There is an urgent need to develop new acceleration techniques capable of exceeding gigaelectron-volt-per-metre (GeV m−1) gradients in order to enable future generations of both light sources and high-energy physics experiments. To address this need, short wavelength accelerators based on wakefields, where an intense relativistic electron beam radiates the demanded fields directly into the accelerator structure or medium, are currently under intense investigation. One such wakefield-based accelerator, the dielectric wakefield accelerator, uses a dielectric-lined waveguide to support a wakefield used for acceleration. Here we show gradients of 1.347±0.020 GeV m−1 using a dielectric wakefield accelerator of 15 cm length, with sub-millimetre transverse aperture, by measuring changes of the kinetic state of relativistic electron beams. We follow this measurement by demonstrating accelerating gradients of 320±17 MeV m−1. Both measurements improve on previous measurements by an order of magnitude and show promise for dielectric wakefield accelerators as sources of high-energy electrons. PMID:27624348

  3. LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN

    NASA Astrophysics Data System (ADS)

    Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor

    2017-12-01

    The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.

  4. The effect of mathematical model development on the instruction of acceleration to introductory physics students

    NASA Astrophysics Data System (ADS)

    Sauer, Tim Allen

    The purpose of this study was to evaluate the effectiveness of utilizing student constructed theoretical math models when teaching acceleration to high school introductory physics students. The goal of the study was for the students to be able to utilize mathematical modeling strategies to improve their problem solving skills, as well as their standardized scientific and conceptual understanding. This study was based on mathematical modeling research, conceptual change research and constructivist theory of learning, all of which suggest that mathematical modeling is an effective way to influence students' conceptual connectedness and sense making of formulaic equations and problem solving. A total of 48 students in two sections of high school introductory physics classes received constructivist, inquiry-based, cooperative learning, and conceptual change-oriented instruction. The difference in the instruction for the 24 students in the mathematical modeling treatment group was that they constructed every formula they needed to solve problems from data they collected. In contrast, the instructional design for the control group of 24 students allowed the same instruction with assigned problems solved with formulas given to them without explanation. The results indicated that the mathematical modeling students were able to solve less familiar and more complicated problems with greater confidence and mental flexibility than the control group students. The mathematical modeling group maintained fewer alternative conceptions consistently in the interviews than did the control group. The implications for acceleration instruction from these results were discussed.

  5. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  6. Chirped pulse inverse free-electron laser vacuum accelerator

    DOEpatents

    Hartemann, Frederic V.; Baldis, Hector A.; Landahl, Eric C.

    2002-01-01

    A chirped pulse inverse free-electron laser (IFEL) vacuum accelerator for high gradient laser acceleration in vacuum. By the use of an ultrashort (femtosecond), ultrahigh intensity chirped laser pulse both the IFEL interaction bandwidth and accelerating gradient are increased, thus yielding large gains in a compact system. In addition, the IFEL resonance condition can be maintained throughout the interaction region by using a chirped drive laser wave. In addition, diffraction can be alleviated by taking advantage of the laser optical bandwidth with negative dispersion focusing optics to produce a chromatic line focus. The combination of these features results in a compact, efficient vacuum laser accelerator which finds many applications including high energy physics, compact table-top laser accelerator for medical imaging and therapy, material science, and basic physics.

  7. Plasma inverse transition acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Ming

    It can be proved fundamentally from the reciprocity theorem with which the electromagnetism is endowed that corresponding to each spontaneous process of radiation by a charged particle there is an inverse process which defines a unique acceleration mechanism, from Cherenkov radiation to inverse Cherenkov acceleration (ICA) [1], from Smith-Purcell radiation to inverse Smith-Purcell acceleration (ISPA) [2], and from undulator radiation to inverse undulator acceleration (IUA) [3]. There is no exception. Yet, for nearly 30 years after each of the aforementioned inverse processes was clarified for laser acceleration, inverse transition acceleration (ITA), despite speculation [4], has remained the least understood, and above all, no practical implementation of ITA has been found, until now. Unlike all its counterparts, in which phase synchronism is established one way or the other such that a particle can continuously gain energy from an acceleration wave, the ITA to be discussed here, termed plasma inverse transition acceleration (PITA), operates under a fundamentally different principle. As a result, the discovery of PITA has been delayed for decades, waiting for a conceptual breakthrough in accelerator physics: the principle of alternating gradient acceleration [5, 6, 7, 8, 9, 10]. In fact, PITA was invented [7, 8] as one of several realizations of the new principle.

  8. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P.; /Fermilab

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  9. The implementation of physical safety system in bunker of the electron beam accelerator

    NASA Astrophysics Data System (ADS)

    Ahmad, M. A.; Hashim, S. A.; Ahmad, A.; Leo, K. W.; Chulan, R. M.; Dalim, Y.; Baijan, A. H.; Zain, M. F.; Ros, R. C.

    2017-01-01

    This paper describes the implementation of a physical safety system for the new low-energy electron beam (EB) accelerator installed at Block 43T, Nuclear Malaysia. The low-energy EB accelerator is locally designed and developed with a target energy of 300 keV. The issues of radiation protection have been addressed by the installation of radiation shielding in the form of a bunker and the installation of radiation monitors. Additional precautions are needed to ensure that personnel are not exposed to radiation and other physical hazards. Unintentional access to the radiation room can cause serious harm, and hence safety features must be installed to prevent such events. In this work we designed and built a control and monitoring system for the shielding door. The system provides signals to the EB control panel to allow or prevent operation. The design includes limit switches, key-activated switches, an emergency stop button and a surveillance camera. An entry procedure was also developed as a written record and for information purposes. As a result, this safety implementation will prevent human error, increase alertness during operation and minimize unnecessary radiation exposure.

  10. Figuring the Acceleration of the Simple Pendulum

    ERIC Educational Resources Information Center

    Lieberherr, Martin

    2011-01-01

    The centripetal acceleration has been known since Huygens' (1659) and Newton's (1684) time. The physics to calculate the acceleration of a simple pendulum has been around for more than 300 years, and a fairly complete treatise has been given by C. Schwarz in this journal. But sentences like "the acceleration is always directed towards the…

  11. Thomas Jefferson National Accelerator Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grames, Joseph; Higinbotham, Douglas; Montgomery, Hugh

    The Thomas Jefferson National Accelerator Facility (Jefferson Lab) in Newport News, Virginia, USA, is one of ten national laboratories under the aegis of the Office of Science of the U.S. Department of Energy (DOE). It is managed and operated by Jefferson Science Associates, LLC. The primary facility at Jefferson Lab is the Continuous Electron Beam Accelerator Facility (CEBAF) as shown in an aerial photograph in Figure 1. Jefferson Lab was created in 1984 as CEBAF and started operations for physics in 1995. The accelerator uses superconducting radio-frequency (srf) techniques to generate high-quality beams of electrons with high intensity and well-controlled polarization. The technology has enabled ancillary facilities to be created. The CEBAF facility is used by an international user community of more than 1200 physicists for a program of exploration and study of nuclear and hadronic matter, the strong interaction and quantum chromodynamics. Additionally, the exceptional quality of the beams facilitates studies of the fundamental symmetries of nature, which complement those of atomic physics on the one hand and of high-energy particle physics on the other. The facility is in the midst of a project to double the energy of the facility and to enhance and expand its experimental facilities. Studies are also pursued with a Free-Electron Laser produced by an energy-recovering linear accelerator.

  12. Efficacy of physical activity interventions in post-natal populations: systematic review, meta-analysis and content coding of behaviour change techniques.

    PubMed

    Gilinsky, Alyssa Sara; Dale, Hannah; Robinson, Clare; Hughes, Adrienne R; McInnes, Rhona; Lavallee, David

    2015-01-01

    This systematic review and meta-analysis reports the efficacy of post-natal physical activity change interventions with content coding of behaviour change techniques (BCTs). Electronic databases (MEDLINE, CINAHL and PsychINFO) were searched for interventions published from January 1980 to July 2013. Inclusion criteria were: (i) interventions including ≥1 BCT designed to change physical activity behaviour, (ii) studies reporting ≥1 physical activity outcome, (iii) interventions commencing later than four weeks after childbirth and (iv) studies including participants who had given birth within the last year. Controlled trials were included in the meta-analysis. Interventions were coded using the 40-item Coventry, Aberdeen & London - Refined (CALO-RE) taxonomy of BCTs and study quality assessment was conducted using Cochrane criteria. Twenty studies were included in the review (meta-analysis: n = 14). Seven were interventions conducted with healthy inactive post-natal women. Nine were post-natal weight management studies. Two studies included women with post-natal depression. Two studies focused on improving general well-being. Interventions in healthy populations, but not those for weight management, successfully changed physical activity. Interventions increased frequency but not volume of physical activity or walking behaviour. Efficacious interventions always included the BCTs 'goal setting (behaviour)' and 'prompt self-monitoring of behaviour'.

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING AND CODING VERIFICATION (HAND ENTRY) (UA-D-14.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...

  14. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The computer code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)

  15. The small stellated dodecahedron code and friends.

    PubMed

    Conrad, J; Chamberland, C; Breuckmann, N P; Terhal, B M

    2018-07-13

    We explore a distance-3 homological CSS quantum code, namely the small stellated dodecahedron code, for dense storage of quantum information, and we compare its performance with the distance-3 surface code. The data and ancilla qubits of the small stellated dodecahedron code can be located on the edges and vertices, respectively, of a small stellated dodecahedron, making this code suitable for three-dimensional connectivity. This code encodes eight logical qubits into 30 physical qubits (plus 22 ancilla qubits for parity check measurements), in contrast with one logical qubit into nine physical qubits (plus eight ancilla qubits) for the surface code. We develop fault-tolerant parity check circuits and a decoder for this code, allowing us to numerically assess the circuit-based pseudo-threshold. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Authors.
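
    The density advantage the abstract quotes can be made explicit by comparing encoding rates, i.e. logical qubits per physical data qubit, ignoring ancillas (a trivial illustration of the numbers above):

```python
def encoding_rate(k_logical: int, n_physical: int) -> float:
    """Logical qubits per physical data qubit."""
    return k_logical / n_physical

rate_dodecahedron = encoding_rate(8, 30)  # ~0.267
rate_surface = encoding_rate(1, 9)        # ~0.111
```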

  16. (Installation of the Vinca Accelerator)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zucker, A.

    1991-04-22

    I participated in a workshop on physics with accelerators in Belgrade, Yugoslavia, and chaired an advisory committee for the Vinca Accelerator Installation, which is currently in progress. I also participated in meetings with the Serbian Academy of Sciences and with the Deputy Prime Minister of Serbia concerning the plans and aspirations of the nuclear research center at Vinca.

  17. PHAZR: A phenomenological code for holeboring in air

    NASA Astrophysics Data System (ADS)

    Picone, J. M.; Boris, J. P.; Lampe, M.; Kailasanath, K.

    1985-09-01

    This report describes a new code for studying holeboring by a charged particle beam, laser, or electric discharge in a gas. The coordinates which parameterize the channel are radial displacement (r) from the channel axis and distance (z) along the channel axis from the energy source. The code is primarily phenomenological; that is, we use closed-form solutions of simple models to represent many of the effects which are important in holeboring. The numerical simplicity gained from these solutions enables us to estimate the structure of the channel over long propagation distances while using a minimum of computer time. This feature makes PHAZR a useful code for those studying and designing future systems. Of particular interest is the design and implementation of the subgrid turbulence model required to compute the enhanced channel cooling caused by asymmetry-driven turbulence. The approximate equations of Boris and Picone form the basis of the model, which includes the effects of turbulent diffusion and fluid transport on the turbulent field itself as well as on the channel parameters. The primary emphasis here is on charged particle beams, and as an example we present typical results for an ETA-like beam propagating in air. These calculations demonstrate how PHAZR may be used to investigate accelerator parameter space and to isolate the important physical parameters which determine the holeboring properties of a given system. The comparison with two-dimensional calculations provides a calibration of the subgrid turbulence model.

  18. Competing explanations for cosmic acceleration or why is the expansion of the universe accelerating?

    NASA Astrophysics Data System (ADS)

    Ishak, Mustapha

    2012-06-01

    For more than a decade, a number of cosmological observations have been indicating that the expansion of the universe is accelerating. Cosmic acceleration and the questions associated with it have become one of the most challenging and puzzling problems in cosmology and physics. Cosmic acceleration can be caused by (i) a repulsive dark energy pervading the universe, (ii) an extension to General Relativity that takes effect at cosmological scales of distance, or (iii) the acceleration may be an apparent effect due to the fact that the expansion rate of space-time is uneven from one region to another in the universe. I will review the basics of these possibilities and provide some recent results including ours on these questions.

  19. Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence

    NASA Astrophysics Data System (ADS)

    Lynn, Jacob William

    We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no
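
    The thesis's central modification of linear transit-time-damping theory, replacing the sharp wave-particle resonance with a decorrelation-broadened one, can be sketched schematically. The Gaussian shape, its normalization, and the parameter values below are illustrative assumptions, not the thesis's exact kernel.

```python
import math

def resonance_broadened(v, v_phase, dv):
    """Gaussian-broadened resonance function of width dv, normalized to unit
    area; it tends to the linear-theory delta function delta(v - v_phase)
    as dv -> 0. Shape and normalization are assumptions for illustration."""
    return math.exp(-((v - v_phase) / dv) ** 2) / (dv * math.sqrt(math.pi))

v_phase = 1.0   # wave phase velocity, arbitrary units
dv = 0.3        # broadening ~ 1 / (k_parallel * decorrelation time), assumed

# In exact linear theory only v == v_phase particles diffuse; with a finite
# decorrelation time, off-resonance particles see a finite (if smaller)
# diffusion coefficient, which is why measured heating exceeds linear theory.
on_res = resonance_broadened(1.0, v_phase, dv)
off_res = resonance_broadened(1.5, v_phase, dv)
```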

  20. An ion accelerator for undergraduate research and teaching

    NASA Astrophysics Data System (ADS)

    Monce, Michael

    1997-04-01

    We have recently upgraded our 400 kV, single-beam-line ion accelerator to a 1 MV, multiple-beam-line machine. This upgrade has greatly expanded the opportunities for student involvement in the laboratory. We will describe four areas of work in which students now participate. The first is the continuing research on excitations produced in ion-molecule collisions, which recently involved the use of digital imaging. The second area of research opened up by the new accelerator involves PIXE. We are currently beginning a cross-disciplinary study of archaeological specimens using PIXE, involving students from both anthropology and physics. Finally, two beam lines from the accelerator will be used for basic work in nuclear physics: Rutherford scattering and nuclear resonances. These two nuclear physics experiments will be integrated into our sophomore-junior level, year-long course in experimental physics.
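
    The Rutherford scattering experiment mentioned above rests on the classical differential cross section dσ/dΩ = (Z₁Z₂e²/4E)² / sin⁴(θ/2). A short sketch, with illustrative beam parameters (1 MeV protons on gold) rather than any specific experiment from this lab:

```python
import math

# Rutherford differential cross section (non-relativistic, no recoil):
#   dsigma/dOmega = (Z1 * Z2 * e^2 / (4 * E))^2 / sin^4(theta / 2)
# With e^2 = 1.44 MeV*fm (Gaussian units) and E in MeV, the result is in
# fm^2/sr (1 fm^2 = 0.01 barn).
E2_MEV_FM = 1.44  # e^2 in MeV*fm

def rutherford(z1, z2, energy_mev, theta_deg):
    """dsigma/dOmega in fm^2/sr at scattering angle theta_deg."""
    theta = math.radians(theta_deg)
    a = z1 * z2 * E2_MEV_FM / (4.0 * energy_mev)
    return (a / math.sin(theta / 2.0) ** 2) ** 2

# Illustrative: 1 MeV protons (Z1 = 1) on gold (Z2 = 79) at two angles.
ds60 = rutherford(1, 79, 1.0, 60.0)
ds120 = rutherford(1, 79, 1.0, 120.0)
# The sin^-4(theta/2) dependence gives ds60 / ds120 = (sin 60 / sin 30)^4 = 9,
# exactly the kind of angular check students can make against detector counts.
```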

  1. Toward GPGPU accelerated human electromechanical cardiac simulations

    PubMed Central

    Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David

    2014-01-01

    In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart, a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single-core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human-scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492

  2. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  3. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  4. LEGO - A Class Library for Accelerator Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunhai

    1998-11-19

    An object-oriented class library for accelerator design and simulation has been designed and implemented in a simple and modular fashion. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and non-linear cases. Recently, Monte Carlo simulation of synchrotron radiation has been added to the library. The code has been used to design and simulate the lattices of PEP-II and SPEAR3, and it was also used for the commissioning of PEP-II. Some examples of how to use the library will be given.
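
    The symplectic split mentioned above alternates exactly solvable flows of the two Hamiltonian pieces. A minimal drift-kick-drift (leapfrog) sketch for a 1-D pendulum-like Hamiltonian H = p²/2 + k(1 − cos q) follows; this illustrates the technique only and is not the LEGO library's API.

```python
import math

def drift(q, p, h):
    """Exact flow of H1 = p^2/2 for time h: q -> q + h*p, p unchanged."""
    return q + h * p, p

def kick(q, p, h, k=1.0):
    """Exact flow of H2 = k*(1 - cos q) for time h: p -> p - h*k*sin(q)."""
    return q, p - h * k * math.sin(q)

def step(q, p, h):
    """Second-order drift-kick-drift step; symplectic by construction,
    since it composes exact Hamiltonian flows."""
    q, p = drift(q, p, h / 2.0)
    q, p = kick(q, p, h)
    q, p = drift(q, p, h / 2.0)
    return q, p

def energy(q, p, k=1.0):
    return 0.5 * p * p + k * (1.0 - math.cos(q))

# Track for 10,000 steps: the energy error stays bounded (no secular drift),
# the hallmark of symplectic integration that matters for long-term tracking.
q, p = 1.0, 0.0
e0 = energy(q, p)
for _ in range(10000):
    q, p = step(q, p, 0.01)
```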

  5. MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, M; Harnett, N; Department of Radiation Oncology, University of Toronto, Toronto, Ontario

    Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, "SIMAC", has been used as a learning aid in a professional development course three times (2014-2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading; the effect of steering coil currents on beam symmetry; and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists, regulators, and service engineers; a 1-week course) as well as 20 graduate students (a 1-month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e., energy changes), the majority of students were able to correctly perform these beam adjustments. Conclusion: Software simulation (SIMAC) can be used to teach linac physics effectively. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.

  6. Accelerators for America's Future

    NASA Astrophysics Data System (ADS)

    Bai, Mei

    2016-03-01

    The particle accelerator, a powerful tool to energize beams of charged particles to a desired speed and energy, has been the workhorse for investigating the fundamental structure of matter and the fundamental laws of nature. The best-known examples are the 2-mile-long Stanford Linear Accelerator at SLAC, the high-energy proton-antiproton collider Tevatron at Fermilab, and the Large Hadron Collider currently in operation at CERN. During less than a century of development of accelerator science and technology, which has led to a dazzling list of discoveries, particle accelerators have also found various applications beyond particle and nuclear physics research and have become an indispensable part of the economy. Today, one can find a particle accelerator at almost every corner of our lives, ranging from the X-ray machine at airport security to radiation diagnostics and therapy in hospitals. This presentation will give a brief introduction to the applications of this powerful tool in fundamental research as well as in industry. Challenges in accelerator science and technology will also be briefly presented.

  7. Accelerator shield design of KIPT neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Gohar, Y.

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW, using 100-MeV electrons. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5 mrem/hr during operation. The main source of the biological dose is the photons and neutrons generated by interactions of electrons leaked from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design, due to its capability to handle coupled electron-photon-neutron transport problems. The direct photon dose can be tallied by an MCNPX calculation starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons: the neutron yield from the interactions with the surrounding components is less than 0.01 neutron per electron. This causes difficulties for Monte Carlo analyses and consumes tremendous computation time for tallying the neutron dose outside the shield boundary with acceptable statistics. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for the study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron and secondary photon doses. The weight windows variance reduction technique is utilized for both neutron and photon dose calculations. Two shielding materials, i.e., heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain

  8. Heavy ion linear accelerator for radiation damage studies of materials

    NASA Astrophysics Data System (ADS)

    Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.; Nolen, Jerry; Barcikowski, Albert; Pellin, Michael; Yacout, Abdellatif

    2017-03-01

    A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor, making possible the study of fission-fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. The new heavy-ion accelerator will provide the appropriate energies and beam intensities to study these effects, allowing experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for 238U50+ and 1.7 MeV for protons. This energy is crucial to the design since it closely mimics the fission fragments that produce the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material, allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.
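
    The quoted 1.2 MeV/u for 238U50+ is energy per nucleon, so the total kinetic energy and beam velocity follow directly. A non-relativistic estimate is adequate here (1.2 MeV/u is far below the 931.5 MeV/u rest energy); the comparison to fission-fragment velocities is an order-of-magnitude remark, not a design number from the paper.

```python
import math

AMU_MEV = 931.494  # atomic mass unit rest energy in MeV

def beam_kinematics(energy_per_u_mev, mass_u):
    """Total kinetic energy (MeV) and beta = v/c for a heavy-ion beam.
    Non-relativistic approximation: beta = sqrt(2 * (E/A) / (931.5 MeV/u)),
    valid since E/A << 931.5 MeV/u."""
    total_mev = energy_per_u_mev * mass_u
    beta = math.sqrt(2.0 * energy_per_u_mev / AMU_MEV)
    return total_mev, beta

total, beta = beam_kinematics(1.2, 238)   # 238U at the 1.2 MeV/u design energy
# total ~ 286 MeV per ion; beta ~ 0.05, i.e. about 5% of the speed of light,
# comparable to fission-fragment velocities (roughly 1 MeV/u), which is the
# physical motivation for the chosen design energy.
```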

  9. Heavy ion linear accelerator for radiation damage studies of materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.

    A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for U-238(50+) and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.

  10. The Los Alamos suite of relativistic atomic physics codes

    DOE PAGES

    Fontes, C. J.; Zhang, H. L.; Abdallah, J., Jr.; ...

    2015-05-28

    The Los Alamos SuitE of Relativistic (LASER) atomic physics codes is a robust, mature platform that has been used to model highly charged ions in a variety of ways. The suite includes capabilities for calculating data related to fundamental atomic structure, as well as the processes of photoexcitation, electron-impact excitation and ionization, photoionization and autoionization within a consistent framework. These data can be of a basic nature, such as cross sections and collision strengths, which are useful in making predictions that can be compared with experiments to test fundamental theories of highly charged ions, such as quantum electrodynamics. The suite can also be used to generate detailed models of energy levels and rate coefficients, and to apply them in the collisional-radiative modeling of plasmas over a wide range of conditions. Such modeling is useful, for example, in the interpretation of spectra generated by a variety of plasmas. In this work, we provide a brief overview of the capabilities within the Los Alamos relativistic suite along with some examples of its application to the modeling of highly charged ions.

  11. Corkscrew Motion of an Electron Beam due to Coherent Variations in Accelerating Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August

    2016-09-13

    Corkscrew motion results from the interaction of fluctuations of beam electron energy with accidental magnetic dipoles caused by misalignment of the beam transport solenoids. Corkscrew is a serious concern for high-current linear induction accelerators (LIA). A simple scaling law for corkscrew amplitude derived from a theory based on a constant-energy beam coasting through a uniform magnetic field has often been used to assess LIA vulnerability to this effect. We use a beam dynamics code to verify that this scaling also holds for an accelerated beam in a non-uniform magnetic field, as in a real accelerator. Results of simulations with this code are strikingly similar to measurements on one of the LIAs at Los Alamos National Laboratory.

  12. Highly-Damped Spectral Acceleration as a Ground Motion Intensity Measure for Estimating Collapse Vulnerability of Buildings

    NASA Astrophysics Data System (ADS)

    Buyco, K.; Heaton, T. H.

    2016-12-01

    Current U.S. seismic codes and performance-based design recommendations quantify ground motion intensity using 5%-damped spectral acceleration when estimating the collapse vulnerability of buildings. This intensity measure works well for predicting inter-story drift due to moderate shaking, but other measures have been shown to be better for estimating collapse risk. We propose using highly-damped (>10%) spectral acceleration to assess collapse vulnerability. As damping is increased, the spectral acceleration at a given period T begins to behave like a weighted average of the corresponding lightly-damped (i.e., 5%) spectrum over a range of periods, with the weights for periods longer than T increasing as damping increases. Using high damping is physically intuitive for two reasons. First, ductile buildings dissipate a large amount of hysteretic energy before collapse and thus behave more like highly-damped systems. Second, heavily damaged buildings experience period lengthening, giving further credence to the weighted-averaging property of highly-damped spectral acceleration. To determine the optimal damping value(s) for this ground motion intensity measure, we conduct incremental dynamic analysis for a suite of ground motions on several different mid-rise steel buildings and select the damping value yielding the lowest dispersion of intensity at the collapse threshold. Spectral acceleration calculated with damping as high as 70% has been shown to be a better indicator of collapse than that with 5% damping.
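
    Spectral acceleration at damping ratio ζ is the peak (pseudo-)acceleration response of a linear single-degree-of-freedom oscillator of period T driven by the ground motion. The sketch below computes it with the standard Newmark average-acceleration integrator on a synthetic sine-burst record (a made-up input, not a recorded ground motion or the authors' suite).

```python
import math

def spectral_accel(ag, dt, period, zeta):
    """Pseudo-spectral acceleration Sa = omega^2 * max|u| for a unit-mass
    linear SDOF oscillator, u'' + 2*zeta*w*u' + w^2*u = -ag(t), integrated
    with the Newmark average-acceleration method (gamma=1/2, beta=1/4),
    which is unconditionally stable."""
    w = 2.0 * math.pi / period
    c, k = 2.0 * zeta * w, w * w            # damping and stiffness per unit mass
    u = v = 0.0
    a = -ag[0]                              # initial acceleration from u = v = 0
    kbar = k + 2.0 * c / dt + 4.0 / dt ** 2  # effective stiffness
    umax = 0.0
    for i in range(1, len(ag)):
        # incremental effective load, then displacement/velocity/accel updates
        dp = -(ag[i] - ag[i - 1]) + (4.0 / dt + 2.0 * c) * v + 2.0 * a
        du = dp / kbar
        dv = 2.0 * du / dt - 2.0 * v
        da = 4.0 * du / dt ** 2 - 4.0 * v / dt - 2.0 * a
        u, v, a = u + du, v + dv, a + da
        umax = max(umax, abs(u))
    return w * w * umax

# Synthetic "ground motion": 0.3 g-amplitude 2 Hz sine burst lasting 1 s,
# zero-padded to 5 s (arbitrary illustrative values).
dt = 0.005
ag = [0.3 * math.sin(2.0 * math.pi * 2.0 * i * dt) if i * dt < 1.0 else 0.0
      for i in range(1000)]

sa_05 = spectral_accel(ag, dt, period=0.5, zeta=0.05)   # lightly damped
sa_70 = spectral_accel(ag, dt, period=0.5, zeta=0.70)   # heavily damped
# Heavy damping suppresses the resonant peak at T = 0.5 s (2 Hz), which is
# the smoothing/averaging behavior the abstract describes.
```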

  13. Electron Accelerator Shielding Design of KIPT Neutron Source Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Zhaopeng; Gohar, Yousry

    The Argonne National Laboratory of the United States and the Kharkov Institute of Physics and Technology of Ukraine have been collaborating on the design, development and construction of a neutron source facility at the Kharkov Institute of Physics and Technology utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW, using 100-MeV electrons. The facility was designed to perform basic and applied nuclear research, produce medical isotopes, and train nuclear specialists. The biological shield of the accelerator building was designed to reduce the biological dose to less than 5.0e-03 mSv/h during operation. The main source of the biological dose for the accelerator building is the photons and neutrons generated from different interactions of electrons leaked from the electron gun and the accelerator sections with the surrounding components and materials. The Monte Carlo N-particle extended code (MCNPX) was used for the shielding calculations because of its capability to perform electron-, photon-, and neutron-coupled transport simulations. The photon dose was tallied using the MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is very small, approximately 0.01 neutron for a 100-MeV electron and even smaller for lower-energy electrons. This causes difficulties for the Monte Carlo analyses and consumes tremendous computation resources for tallying the neutron dose outside the shield boundary with acceptable accuracy. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were utilized for this study. The generated neutrons were banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron dose. The weight windows variance reduction technique was also utilized for both neutron

  14. Application of particle accelerators in research.

    PubMed

    Mazzitelli, Giovanni

    2011-07-01

    Since the beginning of the past century, accelerators have played a fundamental role as powerful tools to discover the world around us and how the universe has evolved since the Big Bang, and to develop fundamental instruments for everyday life. Although more than 15 000 accelerators are operating around the world, only a very few of them are dedicated to fundamental research. An overview of the present high energy physics (HEP) accelerator status and prospects is presented.

  15. Collisionless Shocks and Particle Acceleration.

    NASA Astrophysics Data System (ADS)

    Malkov, M.

    2016-12-01

    Collisionless shocks emerged in the 1950s and 1960s as an important branch of plasma physics and have remained one ever since. New applications pose new challenges to our understanding of collisionless shock mechanisms. Particle acceleration in astrophysical settings, studied primarily in connection with the putative origin of cosmic rays (CRs) in supernova remnant (SNR) shocks, stands out, with the collisionless shock mechanism being the key. Among recent laboratory applications, a laser-based tabletop proton accelerator is an affordable, compact alternative to big synchrotron accelerators. The much-anticipated proof of CR acceleration in supernova remnants is hindered by our limited understanding of collisionless shock mechanisms. Over the last decade, dramatically improved observations have puzzled theorists with unexpected discoveries. The difference between the helium/carbon and proton CR rigidity (momentum-to-charge ratio) spectra, seemingly inconsistent with the acceleration and propagation theories, and the perplexing positron excess in the 10-300 GeV range are just two recent examples. The latter is now also actively discussed in the particle physics and CR communities as a possible signature of decay or annihilation of hypothetical dark matter particles. By considering the initial (injection) phase of the diffusive shock acceleration mechanism, including particle reflection off the shock front, where an elemental similarity of particle dynamics does not apply, I will discuss recent suggestions of how to address the new data from the collisionless shock perspective. The backreaction of accelerated particles on the shock structure, its environment, and its visibility across the electromagnetic spectrum from radio to gamma rays is another key aspect of collisionless shocks that will be discussed.

  16. Accurate and efficient spin integration for particle accelerators

    DOE PAGES

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
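
    Quaternions are a natural fit for spin tracking because composing many small element-by-element rotations keeps the representation exactly three-parameter and unit-norm to machine precision, with no accumulating orthogonality error. A minimal sketch of the idea (generic quaternion algebra, not the GPUSPINTRACK API):

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rot_quat(axis, angle):
    """Unit quaternion for a rotation by `angle` about unit vector `axis`."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def rotate(q, v):
    """Rotate vector v by quaternion q via v' = q v q*."""
    qv = (0.0, v[0], v[1], v[2])
    qc = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = qmul(qmul(q, qv), qc)
    return (x, y, z)

# Compose 10,000 small precessions about the vertical axis, as a spin might
# accumulate element by element around a ring; the total angle is 2*pi, so a
# horizontal spin returns to its start, and the quaternion norm stays ~1.
dtheta = 2.0 * math.pi / 10000.0
step_q = rot_quat((0.0, 1.0, 0.0), dtheta)
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(10000):
    q = qmul(step_q, q)
spin = rotate(q, (1.0, 0.0, 0.0))
```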

  17. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor, although the Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  18. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties in the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection are discussed, and future accelerator experiments needed for testing and validating codes and shielding materials are described.

  19. Modeling multi-GeV class laser-plasma accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim

    2016-10-01

    Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient, PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we will illustrate the issues related to the guiding of a high-intensity, short-pulse, laser when a realistic description for both the laser driver and the background plasma is adopted. Work Supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.

  20. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetric estimate of the radiation energy delivered to human tissues. Over the past years, several publications have addressed the problem of internal dose estimation in volumes of several sizes, considering photon and electron sources; most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes, due to the variety of resources and potentials they offer to carry out dose calculations, several aspects such as the physical models, cross sections, and numerical approximations used in the simulations still remain objects of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. The maximum differences found between the two codes are 5.0% and 10% for photons and electrons, respectively. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
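    As a toy illustration of the absorbed-fraction quantity being compared here, the sketch below treats the simplest imaginable case, a point photon source at the centre of a sphere, where the answer is known in closed form. The attenuation coefficient and radius are hypothetical, and real MCNP/GEANT4 runs track scattering, secondaries, and energy deposition rather than just the first interaction site.

```python
import numpy as np

rng = np.random.default_rng(42)

def absorbed_fraction_mc(mu, radius, n=200_000):
    """Toy Monte Carlo: photons leave a point source at the centre of a
    sphere; a history is scored as absorbed if its first interaction
    occurs inside the sphere (free path ~ exponential with mean 1/mu)."""
    path = rng.exponential(1.0 / mu, size=n)
    return np.mean(path < radius)

mu, r = 0.1, 10.0                      # hypothetical: 0.1 /cm, 10 cm sphere
est = absorbed_fraction_mc(mu, r)
exact = 1.0 - np.exp(-mu * r)          # closed-form answer for this toy case
```

    Because the toy has an analytic answer, it also shows how such codes are sanity-checked: the statistical estimate should agree with `exact` to within the Monte Carlo uncertainty.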

  1. pycola: N-body COLA method code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias

    2015-09-01

    pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using Lagrangian Perturbation Theory (LPT) while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.

  2. Use of color-coded sleeve shutters accelerates oscillograph channel selection

    NASA Technical Reports Server (NTRS)

    Bouchlas, T.; Bowden, F. W.

    1967-01-01

    Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.

  3. Introduction to Particle Acceleration in the Cosmos

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Horwitz, J. L.; Perez, J.; Quenby, J.

    2005-01-01

    Accelerated charged particles have been used on Earth since 1930 to explore the very essence of matter, for industrial applications, and for medical treatments. Throughout the universe nature employs a dizzying array of acceleration processes to produce particles spanning twenty orders of magnitude in energy range, while shaping our cosmic environment. Here, we introduce and review the basic physical processes causing particle acceleration in astrophysical plasmas from geospace to the outer reaches of the cosmos. These processes are chiefly divided into four categories: adiabatic and other forms of non-stochastic acceleration, magnetic energy storage and stochastic acceleration, shock acceleration, and plasma wave and turbulent acceleration. The purpose of this introduction is to set the stage and context for the individual papers comprising this monograph.

  4. LIGHT SOURCE: Physical design of a 10 MeV LINAC for polymer radiation processing

    NASA Astrophysics Data System (ADS)

    Feng, Guang-Yao; Pei, Yuan-Ji; Wang, Lin; Zhang, Shan-Cai; Wu, Cong-Feng; Jin, Kai; Li, Wei-Min

    2009-06-01

    In China, polymer radiation processing has become one of the most important processing industries. The radiation processing source may be an electron beam accelerator or a radioactive source. The physical design of an electron beam facility for radiation crosslinking is introduced in this paper because of its much higher dose rate and efficiency. The main part of this facility is a 10 MeV travelling-wave electron linac with a constant-impedance accelerating structure. A start-to-end simulation of the linac is reported in this paper. The codes Opera-3d, Poisson-Superfish and Parmela are used to describe the electromagnetic elements of the accelerator and track the particle distribution from the cathode to the end of the linac. After beam-dynamics optimization, the wave phase velocities in the structure have been chosen to be 0.56, 0.9 and 0.999, respectively. Physical parameters of the main elements, such as the DC electron gun, the iris-loaded periodic structure and the solenoids, are presented. Simulation results prove that the design can satisfy the industrial requirements. The linac is under construction; some components have been finished, and measurements show that they are in good agreement with the design values.
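    The graded phase velocities (0.56, 0.9, 0.999) reflect standard relativistic kinematics: the electron's speed v/c climbs toward 1 as it gains energy, so each section of the structure must be matched to the beam. A short sketch of that relation (the sample energies are illustrative only, not the paper's design values):

```python
import math

M_E = 0.511  # electron rest energy, MeV

def beta(kinetic_mev):
    """Electron speed v/c for a given kinetic energy in MeV."""
    gamma = 1.0 + kinetic_mev / M_E
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Illustrative energies showing why the phase velocity must be graded
# along the linac: beta rises steeply at first, then saturates near 1.
for t in (0.05, 0.5, 10.0):
    print(f"T = {t:6.2f} MeV  ->  v/c = {beta(t):.3f}")
```

    A 50 keV electron from a DC gun travels at roughly 0.4c, so the first cells of a travelling-wave structure must run well below the speed of light, while cells beyond a few MeV can be designed for v/c essentially equal to 1.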

  5. Laser-driven dielectric electron accelerator for radiobiology researches

    NASA Astrophysics Data System (ADS)

    Koyama, Kazuyoshi; Matsumura, Yosuke; Uesaka, Mitsuru; Yoshida, Mitsuhiro; Natsui, Takuya; Aimierding, Aimidula

    2013-05-01

    In order to estimate the health risk associated with a low dose of radiation, the fundamental process of radiation effects in a living cell must be understood. It is desired that an electron bunch or photon pulse precisely strike a cell nucleus and its DNA. The required electron energy and bunch charge are several tens of keV to 1 MeV and 0.1 fC to 1 fC, respectively. A beam size smaller than a micron is better for precise observation. Since a laser-driven dielectric electron accelerator seems well suited as a compact micro-beam source, a phase-modulation-masked-type laser-driven dielectric accelerator was studied. Although a preliminary analysis concluded that the grating period and electron speed must satisfy the matching condition L_G/λ = v/c, a deformation of the wavefront in a pillar of the grating relaxed the matching condition and enabled slow electrons to be accelerated. Simulation results using the free FDTD code Meep showed that low-energy electrons of 20 keV felt an acceleration field strength of 20 MV/m and gradually felt a higher field as their speed increased; ultra-relativistic electrons finally felt a field strength of 600 MV/m. The Meep code also showed that the accelerator length needed to reach an energy of 1 MeV was 3.8 mm, and that the required laser power and energy were 11 GW and 350 mJ, respectively. Restrictions on the laser were eased by adopting sequential laser pulses: if the accelerator is illuminated by N sequential pulses, the pulse power, pulse width and pulse energy are reduced to 1/N, 1/N and 1/N², respectively. The required laser power per pulse is estimated to be 2.2 GW when ten pairs of sequential laser pulses are irradiated.

  6. A 2 MV Van de Graaff accelerator as a tool for planetary and impact physics research

    NASA Astrophysics Data System (ADS)

    Mocker, Anna; Bugiel, Sebastian; Auer, Siegfried; Baust, Günter; Colette, Andrew; Drake, Keith; Fiege, Katherina; Grün, Eberhard; Heckmann, Frieder; Helfert, Stefan; Hillier, Jonathan; Kempf, Sascha; Matt, Günter; Mellert, Tobias; Munsat, Tobin; Otto, Katharina; Postberg, Frank; Röser, Hans-Peter; Shu, Anthony; Sternovsky, Zoltán; Srama, Ralf

    2011-09-01

    Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut für Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km/s. Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km/s and with diameters of between 0.05 μm and 5 μm. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds and charges, and is
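    The electrostatic speed gain behind such a dust accelerator follows from energy conservation, qU = ½mv². A hedged sketch (the grain radius, density, and surface potential below are made-up illustrative values; the actual facility charges grains to far higher surface fields, which is how it reaches tens of km/s):

```python
import math

U = 2.0e6  # accelerating potential in volts (2 MV Van de Graaff)

def sphere_mass(radius_m, density_kg_m3):
    """Mass of a solid spherical grain."""
    return (4.0 / 3.0) * math.pi * radius_m**3 * density_kg_m3

def final_speed(charge_c, mass_kg):
    """Non-relativistic speed gained by a charge falling through U."""
    return math.sqrt(2.0 * charge_c * U / mass_kg)

# Hypothetical 0.1 um silicate grain charged to a ~5 V surface potential
# (q = 4*pi*eps0*r*phi for a conducting sphere):
eps0 = 8.854e-12
r = 0.1e-6
q = 4.0 * math.pi * eps0 * r * 5.0
m = sphere_mass(r, 2500.0)
v = final_speed(q, m)
print(f"{v / 1e3:.1f} km/s")
```

    Since v scales as the square root of q/m, smaller grains charged to higher surface potentials reach the highest speeds, which is why the facility's fastest particles are the submicron ones.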

  7. Accelerator science and technology in Europe: EuCARD 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    Accelerator science and technology is one of the key enablers of developments in particle physics and photon physics, as well as of applications in medicine and industry. The paper presents a digest of research results in the domain of accelerator science and technology in Europe, shown during the third annual meeting of EuCARD - European Coordination for Accelerator Research and Development. The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency and phase.

  8. Seismic site coefficients and acceleration design response spectra based on conditions in South Carolina : final report.

    DOT National Transportation Integrated Search

    2014-11-15

    The simplified procedure in design codes for determining earthquake response spectra involves estimating site coefficients to adjust available rock accelerations to site accelerations. Several investigators have noted concerns with the site coeff...

  9. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represents a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  10. Physics through the 1990s: Nuclear physics

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The volume begins with a non-mathematical introduction to nuclear physics. A description of the major advances in the field follows, with chapters on nuclear structure and dynamics, fundamental forces in the nucleus, and nuclei under extreme conditions of temperature, density, and spin. Impacts of nuclear physics on astrophysics and the scientific and societal benefits of nuclear physics are then discussed. Another section deals with scientific frontiers, describing research into the realm of the quark-gluon plasma; the changing description of nuclear matter, specifically the use of the quark model; and the implications of the standard model and grand unified theories of elementary-particle physics; and finishes with recommendations and priorities for nuclear physics research facilities, instrumentation, accelerators, theory, education, and data bases. Appended are a list of national accelerator facilities, a list of reviewers, a bibliography, and a glossary.

  11. ΛCDM is Consistent with SPARC Radial Acceleration Relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, B. W.; Wadsley, J. W., E-mail: kellerbw@mcmaster.ca

    2017-01-20

    Recent analysis of the Spitzer Photometry and Accurate Rotation Curve (SPARC) galaxy sample found a surprisingly tight relation between the radial acceleration inferred from the rotation curves and the acceleration due to the baryonic components of the disk. It has been suggested that this relation may be evidence for new physics, beyond ΛCDM. In this Letter, we show that 32 galaxies from the MUGS2 simulations match the SPARC acceleration relation. These cosmological simulations of star-forming, rotationally supported disks were simulated with a WMAP3 ΛCDM cosmology, and match the SPARC acceleration relation with less scatter than the observational data. These results show that this acceleration relation is a consequence of dissipative collapse of baryons, rather than being evidence for exotic dark-sector physics or new dynamical laws.

  12. Empirical evidence for acceleration-dependent amplification factors

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-specific amplification factors, Fa and Fv, used in current U.S. building codes decrease with increasing base acceleration level as implied by the Loma Prieta earthquake at 0.1g and extrapolated using numerical models and laboratory results. The Northridge earthquake recordings of 17 January 1994 and subsequent geotechnical data permit empirical estimates of amplification at base acceleration levels up to 0.5g. Distance measures and normalization procedures used to infer amplification ratios from soil-rock pairs in predetermined azimuth-distance bins significantly influence the dependence of amplification estimates on base acceleration. Factors inferred using a hypocentral distance norm do not show a statistically significant dependence on base acceleration. Factors inferred using norms implied by the attenuation functions of Abrahamson and Silva show a statistically significant decrease with increasing base acceleration. The decrease is statistically more significant for stiff clay and sandy soil (site class D) sites than for stiffer sites underlain by gravely soils and soft rock (site class C). The decrease in amplification with increasing base acceleration is more pronounced for the short-period amplification factor, Fa, than for the midperiod factor, Fv.

  13. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes are described in detail and computational results are presented.
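    The source iteration with diffusion-synthetic acceleration mentioned above can be illustrated on a much smaller problem. The sketch below runs plain (unaccelerated) source iteration for one-group Sn transport in a slab with diamond differencing; DSA's job in production codes is to speed up exactly this loop when scattering dominates. All parameters here are hypothetical.

```python
import numpy as np

def source_iteration(nx=50, width=4.0, sigma_t=1.0, sigma_s=0.5,
                     q_ext=1.0, n_angles=8, tol=1e-8):
    """One-group Sn slab transport: diamond-difference sweeps driven by
    plain source iteration (the loop DSA is designed to accelerate)."""
    mu, w = np.polynomial.legendre.leggauss(n_angles)  # Sn angles/weights
    dx = width / nx
    phi = np.zeros(nx)                                 # scalar flux
    for it in range(1000):
        q = 0.5 * (sigma_s * phi + q_ext)              # isotropic source
        phi_new = np.zeros(nx)
        for m in range(n_angles):
            a = abs(mu[m]) / dx
            psi_in = 0.0                               # vacuum boundary
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:                            # transport sweep
                psi = (q[i] + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
                psi_in = 2.0 * psi - psi_in            # diamond closure
                phi_new[i] += w[m] * psi
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new, it + 1
        phi = phi_new
    return phi, it + 1

phi, iters = source_iteration()
```

    Source iteration converges at a rate governed by the scattering ratio σs/σt; as that ratio approaches 1 the iteration stalls, which is precisely the regime where diffusion-synthetic acceleration pays off.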

  14. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploring noise-reduction concepts and understanding experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interacting with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources, including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  15. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

    Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, e.g. AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.

  16. UFO: A THREE-DIMENSIONAL NEUTRON DIFFUSION CODE FOR THE IBM 704

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auerbach, E.H.; Jewett, J.P.; Ketchum, M.A.

    A description of UFO, a code for the solution of the few-group neutron diffusion equation in three-dimensional Cartesian coordinates on the IBM 704, is given. An accelerated Liebmann flux iteration scheme is used, and optimum parameters can be calculated by the code whenever they are required. The theory and operation of the program are discussed. (auth)
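    The "accelerated Liebmann" iteration is what is now usually called Gauss-Seidel with successive over-relaxation (SOR). A minimal modern sketch on a model diffusion problem (a uniform source on the unit square with zero boundary values; the grid size and relaxation factor are illustrative, not UFO's):

```python
import numpy as np

def liebmann(n=32, omega=1.8, tol=1e-6):
    """Accelerated Liebmann (SOR) solve of  -laplacian(u) = 1  on the
    unit square with u = 0 on the boundary, 5-point finite differences."""
    h = 1.0 / (n + 1)
    u = np.zeros((n + 2, n + 2))
    for it in range(20000):
        diff = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1] + h * h)
                diff = max(diff, abs(gs - u[i, j]))
                u[i, j] += omega * (gs - u[i, j])  # over-relax the update
        if diff < tol:
            return u, it + 1
    return u, it + 1

# For this model problem the optimum factor is omega = 2/(1 + sin(pi*h));
# computing such optimum parameters on the fly is the same idea the
# abstract describes for UFO.
u, iters = liebmann()
```

    With omega = 1 the scheme reduces to plain Liebmann (Gauss-Seidel); choosing omega near the optimum cuts the iteration count by roughly a factor of the grid dimension.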

  17. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical exercise targeted at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
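    The method of manufactured solutions paired with a grid-refinement order estimate can be demonstrated on a one-dimensional model problem. The sketch below (not GBS itself) manufactures u(x) = sin(πx), derives the forcing term analytically, and checks that a second-order solver recovers the expected convergence order:

```python
import numpy as np

def solve_poisson(f, n):
    """Second-order finite-difference solve of -u'' = f on (0,1),
    with u(0) = u(1) = 0, on a uniform grid of n intervals."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Tridiagonal system for the n-1 interior unknowns
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

# Manufactured solution: pick u_m freely, derive the forcing analytically.
u_m = lambda x: np.sin(np.pi * x)
f_m = lambda x: np.pi**2 * np.sin(np.pi * x)

errors = []
for n in (16, 32, 64):
    x, u = solve_poisson(f_m, n)
    errors.append(np.max(np.abs(u - u_m(x))))

# Observed order of accuracy from successive grid halvings
orders = [np.log2(errors[k] / errors[k + 1]) for k in range(2)]
print(orders)  # both entries should be close to 2 for this scheme
```

    If a coding error degraded the discretization, the observed order would fall below the formal order of the scheme, which is exactly the signal code verification looks for.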

  18. The United States Particle Accelerator School: Educating the Next Generation of Accelerator Scientists and Engineers

    NASA Astrophysics Data System (ADS)

    Barletta, William A.

    2009-03-01

    Only a handful of universities in the US offer any formal training in accelerator science. The United States Particle Accelerator School (USPAS) is a national graduate educational program that has developed a highly successful educational paradigm and, over the past twenty years, has granted more university credit in accelerator/beam science and technology than any university in the world. Sessions are held twice annually, hosted by major US research universities that certify the USPAS faculty and grant course credit. The USPAS paradigm is readily extensible to other rapidly developing, cross-disciplinary research areas such as high energy density physics.

  19. Accelerated test plan for nickel cadmium spacecraft batteries

    NASA Technical Reports Server (NTRS)

    Hennigan, T. J.

    1973-01-01

    An accelerated test matrix is outlined that includes acceptance, baseline and post-cycling tests, chemical and physical analyses, and the data analysis procedures to be used in determining the feasibility of an accelerated test for sealed, nickel cadmium cells.

  20. Accelerator-based validation of shielding codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeitlin, Cary; Heilbronn, Lawrence; Miller, Jack

    2002-08-12

    The space radiation environment poses risks to astronaut health from a diverse set of sources, ranging from low-energy protons and electrons to highly-charged, high-energy atomic nuclei and their associated fragmentation products, including neutrons. The low-energy protons and electrons are the source of most of the radiation dose to Shuttle and ISS crews, while the more energetic particles that comprise the Galactic Cosmic Radiation (protons, He, and heavier nuclei up to Fe) will be the dominant source for crews on long-duration missions outside the earth's magnetic field. Because of this diversity of sources, a broad ground-based experimental effort is required to validate the transport and shielding calculations used to predict doses and dose-equivalents under various mission scenarios. The experimental program of the LBNL group, described here, focuses principally on measurements of charged particle and neutron production in high-energy heavy-ion fragmentation. Other aspects of the program include measurements of the shielding provided by candidate spacesuit materials against low-energy protons (particularly relevant to extra-vehicular activities in low-earth orbit), and the depth-dose relations in tissue for higher-energy protons. The heavy-ion experiments are performed at the Brookhaven National Laboratory's Alternating Gradient Synchrotron and the Heavy-Ion Medical Accelerator in Chiba, Japan. Proton experiments are performed at the Lawrence Berkeley National Laboratory's 88'' Cyclotron with a 55 MeV beam, and at the Loma Linda University Proton Facility with 100 to 250 MeV beam energies. The experimental results are an important component of the overall shielding program, as they allow for simple, well-controlled tests of the models developed to handle the more complex radiation environment in space.

  1. Acceleration technologies for charged particles: an introduction

    NASA Astrophysics Data System (ADS)

    Carter, Richard G.

    2011-01-01

    Particle accelerators have many important uses in scientific experiments, in industry and in medicine. This paper reviews the variety of technologies which are used to accelerate charged particles to high energies. It aims to show how the capabilities and limitations of these technologies are related to underlying physical principles. The paper emphasises the way in which different technologies are used together to convey energy from the electrical supply to the accelerated particles.

  2. A preliminary design of the collinear dielectric wakefield accelerator

    NASA Astrophysics Data System (ADS)

    Zholents, A.; Gai, W.; Doran, S.; Lindberg, R.; Power, J. G.; Strelnikov, N.; Sun, Y.; Trakhtenberg, E.; Vasserman, I.; Jing, C.; Kanareykin, A.; Li, Y.; Gao, Q.; Shchegolkov, D. Y.; Simakov, E. I.

    2016-09-01

    A preliminary design of the multi-meter long collinear dielectric wakefield accelerator that achieves a highly efficient transfer of the drive bunch energy to the wakefields and to the witness bunch is considered. It is made from 0.5 m long accelerator modules, each containing a vacuum chamber with dielectric-lined walls, a quadrupole wiggler, an rf coupler, and a BPM assembly. The single-bunch breakup instability is a major limiting factor for accelerator efficiency, and BNS damping is applied to obtain stable multi-meter long propagation of a drive bunch. Numerical simulations using a 6D particle tracking computer code are performed and tolerances to various errors are defined.

  3. Amplitude-dependent orbital period in alternating gradient accelerators

    DOE PAGES

    Machida, S.; Kelliher, D. J.; Edmonds, C. S.; ...

    2016-03-16

    Orbital period in a ring accelerator and time of flight in a linear accelerator depend on the amplitude of betatron oscillations. The variation is negligible in ordinary particle accelerators with relatively small beam emittance. In an accelerator for large-emittance beams such as muons and unstable nuclei, however, this effect cannot be ignored. In this study, we measured the orbital period in a linear non-scaling fixed-field alternating-gradient accelerator, which is a candidate for muon acceleration, and compared it with the theoretical prediction. The good agreement between them provides an important foundation for the design of particle accelerators for a new generation of particle and nuclear physics experiments.

  4. Simon van der Meer (1925-2011):. A Modest Genius of Accelerator Science

    NASA Astrophysics Data System (ADS)

    Chohan, Vinod C.

    2011-02-01

    Simon van der Meer was a brilliant scientist and a true giant of accelerator science. His seminal contributions to accelerator science have been essential to this day in our quest for satisfying the demands of modern particle physics. Whether we talk of long base-line neutrino physics, antiproton-proton physics at Fermilab, or proton-proton physics at the LHC, his techniques and inventions have been a vital part of the modern-day successes. Simon van der Meer and Carlo Rubbia were the first CERN scientists to become Nobel laureates in Physics, in 1984. Van der Meer's lesser-known contributions spanned a whole range of subjects in accelerator science, from magnet design to power supply design, beam measurements, slow beam extraction, sophisticated programs and controls.

  5. The Four Lives of a Nuclear Accelerator

    NASA Astrophysics Data System (ADS)

    Wiescher, Michael

    2017-06-01

    Electrostatic accelerators emerged as a major tool in research and industry in the second half of the twentieth century. In low-energy nuclear physics in particular, they have been essential for addressing a number of critical research questions from nuclear structure to nuclear astrophysics. This article describes this development through the example of a single machine which has been used for nearly sixty years at the forefront of scientific research in nuclear physics. The article summarizes the concept of electrostatic accelerators and outlines how this accelerator developed from a bare support function to an independent research tool that has been utilized in different research environments and institutions and now looks forward to a new life as part of the CASPAR experiment at the 4850-foot level of the Sanford Underground Research Facility.

  6. Policy challenges in the fight against childhood obesity: low adherence in San Diego area schools to the California Education Code regulating physical education.

    PubMed

    Consiglieri, G; Leon-Chi, L; Newfield, R S

    2013-01-01

    We assessed adherence to the Physical Education (PE) requirements of the California Education Code in San Diego area schools. Surveys were administered anonymously to children and adolescents capable of physical activity who were visiting a specialty clinic at Rady Children's Hospital San Diego. The main questions asked were gender, grade, PE classes per week, and time spent doing PE. 324 surveys were completed; 36 charter-school students, who are not required to abide by the state code, were excluded. We report on 288 students (59% female), mostly Hispanic (43%) or Caucasian (34%). In grades 1-6, 66.7% reported less than the 200 min per 10 school days required by the PE code. Only 20.7% had daily PE. The average number of PE days/week was 2.6. In grades 7-12, 42.2% reported less than the required 400 min per 10 school days. Daily PE was noted in 47.8%. The average number of PE days/week was 3.4. Almost 17% had no PE, most commonly in the final two grades of high school (45.7%). There is low adherence to the California Physical Education mandate in the San Diego area, contributing to poor fitness and obesity. Lack of adequate PE is most evident in grades 1-6 and grades 11-12. Better resources, awareness, and enforcement are crucial.
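    The state minimums quoted above (200 minutes per 10 school days for grades 1-6, 400 minutes for grades 7-12) reduce to a simple adherence check. A minimal sketch, with illustrative inputs:

    ```python
    # Adherence check against the PE minutes thresholds quoted in the abstract:
    # 200 min per 10 school days for grades 1-6, 400 min for grades 7-12.
    def meets_pe_minimum(grade: int, minutes_per_10_days: float) -> bool:
        """Return True if reported PE time meets the state minimum."""
        required = 200 if 1 <= grade <= 6 else 400
        return minutes_per_10_days >= required

    # Illustrative case: 3 PE days/week of 40 min each is 6 sessions
    # per 10 school days, i.e. 240 minutes.
    print(meets_pe_minimum(4, 240))   # grade 4: 240 >= 200 -> True
    print(meets_pe_minimum(9, 240))   # grade 9: 240 < 400  -> False
    ```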

  7. Naked singularities as particle accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, Mandar; Joshi, Pankaj S.

    We investigate here particle acceleration by naked singularities to arbitrarily high center-of-mass energies. Recently it has been suggested that black holes could be used as particle accelerators to probe Planck-scale physics. We show that naked singularities serve the same purpose and probably would do better than their black hole counterparts. We focus on the scenario of a self-similar gravitational collapse starting from regular initial data, leading to the formation of a globally naked singularity. It is seen that when particles moving along timelike geodesics interact and collide near the Cauchy horizon, the energy of collision in the center-of-mass frame will be arbitrarily high, thus offering a window into Planck-scale physics.

  8. Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations

    DTIC Science & Technology

    2017-05-08

    Briefing charts, 5 April 2017 - 8 May 2017 (ERC Incorporated / AFRL-West). The charts present the use of Kokkos for performant cross-platform acceleration of liquid rocket combustion simulations, including a SPACE simulation of a rotating detonation engine (courtesy of Dr. Christopher Lietz). DISTRIBUTION A: Approved for public release.

  9. Dependency graph for code analysis on emerging architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail Jurievich; Lipnikov, Konstantin

    A directed acyclic graph (DAG) of dependencies is becoming the standard for modern multi-physics codes. The ideal DAG is the true block scheme of a multi-physics code. It is therefore a convenient object for in-situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.
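    As a minimal illustration of the idea (not code from the report), a multi-physics time step can be expressed as a dependency mapping and ordered with Python's standard-library `graphlib`; the phase names below are hypothetical:

    ```python
    # Sketch: a multi-physics code step as a directed acyclic dependency
    # graph, plus a topological execution order -- the starting point for
    # the kind of in-situ cost/bottleneck analysis described above.
    from graphlib import TopologicalSorter  # Python 3.9+

    # Each (hypothetical) phase maps to the set of phases it depends on.
    dag = {
        "mesh":      set(),
        "hydro":     {"mesh"},
        "radiation": {"mesh"},
        "coupling":  {"hydro", "radiation"},
        "io":        {"coupling"},
    }

    # Any valid execution order respects every dependency edge.
    order = list(TopologicalSorter(dag).static_order())
    print(order)
    ```

    Phases with no mutual dependencies (here `hydro` and `radiation`) can be scheduled concurrently, which is where the cost analysis of data motion becomes relevant.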

  10. (U) Ristra Next Generation Code Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.; Daniel, David John

    LANL’s Weapons Physics management (ADX) and ASC program office have defined a strategy for exascale-class application codes that follows two supportive, and mutually risk-mitigating paths: evolution for established codes (with a strong pedigree within the user community) based upon existing programming paradigms (MPI+X); and Ristra (formerly known as NGC), a high-risk/high-reward push for a next-generation multi-physics, multi-scale simulation toolkit based on emerging advanced programming systems (with an initial focus on data-flow task-based models exemplified by Legion [5]). Development along these paths is supported by the ATDM, IC, and CSSE elements of the ASC program, with the resulting codes forming a common ecosystem, and with algorithm and code exchange between them anticipated. Furthermore, solution of some of the more challenging problems of the future will require a federation of codes working together, using established-pedigree codes in partnership with new capabilities as they come on line. The role of Ristra as the high-risk/high-reward path for LANL’s codes is fully consistent with its role in the Advanced Technology Development and Mitigation (ATDM) sub-program of ASC (see Appendix C), in particular its emphasis on evolving ASC capabilities through novel programming models and data management technologies.

  11. Transport, Acceleration and Spatial Access of Solar Energetic Particles

    NASA Astrophysics Data System (ADS)

    Borovikov, D.; Sokolov, I.; Effenberger, F.; Jin, M.; Gombosi, T. I.

    2017-12-01

    Solar Energetic Particles (SEPs) are a major branch of space weather. Often driven by Coronal Mass Ejections (CMEs), SEPs have a very high destructive potential, which includes but is not limited to disrupting communication systems on Earth, inflicting harmful and potentially fatal radiation doses on crew members onboard spacecraft and, in extreme cases, on people aboard high-altitude flights. However, the research community currently lacks efficient tools to predict such hazardous SEP events. Such a tool would serve as the first step towards improving humanity's preparedness for SEP events and ultimately its ability to mitigate their effects. The main goal of the presented research is to develop a computational tool that provides these capabilities and meets the community's demand. Our model has forecasting capability and can form the basis for an operational system that provides live information on the current potential threats posed by SEPs based on observations of the Sun. The tool comprises several numerical models, which are designed to simulate different physical aspects of SEPs. The background conditions in the interplanetary medium, in particular the Coronal Mass Ejection driving the particle acceleration, play a defining role and are simulated with the state-of-the-art MHD solver, Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme (BATS-R-US). The newly developed particle code, Multiple-Field-Line-Advection Model for Particle Acceleration (M-FLAMPA), simulates the actual transport and acceleration of SEPs and is coupled to the MHD code. The special property of SEPs, the tendency to follow magnetic lines of force, is fully exploited in the computational model, which substitutes a multitude of 1-D models for a complicated 3-D model. This approach significantly simplifies computations and improves the time performance of the overall model.
Also, it plays an important role of mapping the affected region by connecting it with the origin of
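    The field-line decomposition described above can be sketched as follows. This is an illustrative toy, not M-FLAMPA source code: a hypothetical particle distribution is advanced along each of several magnetic field lines with an independent 1-D upwind advection step.

    ```python
    import numpy as np

    # Toy sketch of the 1-D field-line decomposition: the 3-D transport
    # problem is replaced by many independent 1-D problems, one per
    # magnetic field line, each advanced with a simple upwind step.
    def advect_1d(f, v, ds, dt):
        """One upwind step of df/dt + v df/ds = 0 along a field line (v > 0)."""
        fnew = f.copy()
        fnew[1:] -= v * dt / ds * (f[1:] - f[:-1])
        return fnew

    n_lines, n_cells = 4, 100
    ds, dt, v = 1.0, 0.5, 1.0          # CFL number = v*dt/ds = 0.5
    lines = [np.exp(-((np.arange(n_cells) - 10) / 3.0) ** 2)
             for _ in range(n_lines)]  # Gaussian pulse near the Sun-end

    # Each field line is advanced independently -- trivially parallel.
    lines = [advect_1d(f, v, ds, dt) for f in lines]
    ```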

  12. Measurement of heat load density profile on acceleration grid in MeV-class negative ion accelerator.

    PubMed

    Hiratsuka, Junichi; Hanada, Masaya; Kojima, Atsushi; Umeda, Naotaka; Kashiwagi, Mieko; Miyamoto, Kenji; Yoshida, Masafumi; Nishikiori, Ryo; Ichikawa, Masahiro; Watanabe, Kazuhiro; Tobari, Hiroyuki

    2016-02-01

    To understand the physics of negative ion extraction/acceleration, the heat load density profile on the acceleration grid has been measured for the first time in the ITER prototype accelerator, where negative ions are accelerated to 1 MeV in five acceleration stages. In order to resolve the profile, the peripheries around the apertures on the acceleration grid were separated into 34 thermally insulated blocks with thermocouples. The spatial resolution of 3 mm is small enough to measure the tail of the beam profile, whose beam diameter is ∼16 mm. It was found that there were two peaks of heat load density around the aperture. These two peaks were shown to be caused by the intercepted negative ions and by secondary electrons, through a detailed investigation in which the beam optics and gas density profile were varied. This is the first experimental result of its kind, and it is useful for understanding the trajectories of these particles.

  13. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  14. A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Su, T. Y.; Kao, T. J.

    1991-01-01

    This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.

  15. Elementary particle physics

    NASA Technical Reports Server (NTRS)

    Perkins, D. H.

    1986-01-01

    Elementary particle physics is discussed. Status of the Standard Model of electroweak and strong interactions; phenomena beyond the Standard Model; new accelerator projects; and possible contributions from non-accelerator experiments are examined.

  16. Muon simulation codes MUSIC and MUSUN for underground physics

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, V. A.

    2009-03-01

    The paper describes two Monte Carlo codes dedicated to muon simulations: MUSIC (MUon SImulation Code) and MUSUN (MUon Simulations UNderground). MUSIC is a package for muon transport through matter. It is particularly useful for propagating muons through large thicknesses of rock or water, for instance from the surface down to an underground/underwater laboratory. MUSUN is designed to use the results of muon transport through rock/water to generate muons in or around an underground laboratory, taking into account their energy spectrum and angular distribution.
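    The continuous-slowing-down approximation that underlies such muon transport, the mean loss ⟨-dE/dX⟩ = a + bE with a the ionisation term and b the radiative (bremsstrahlung, pair production, photonuclear) term, gives closed-form range and surviving-energy estimates. The standard-rock values below are typical textbook numbers, not parameters taken from MUSIC:

    ```python
    import math

    # Mean muon energy loss <-dE/dX> = a + b*E, with typical standard-rock
    # values a ~ 2 MeV cm^2/g (ionisation) and b ~ 4e-6 cm^2/g (radiative).
    A = 2.0e-3      # GeV cm^2/g
    B = 4.0e-6      # cm^2/g

    def muon_range(e_gev):
        """Mean range in g/cm^2 for a muon of initial energy e_gev (CSDA)."""
        return math.log(1.0 + B * e_gev / A) / B

    def surviving_energy(e_gev, depth):
        """Mean energy after crossing `depth` g/cm^2, or 0 if the muon stops."""
        e = (e_gev + A / B) * math.exp(-B * depth) - A / B
        return max(e, 0.0)

    # Illustrative case: a 1 TeV muon and ~1 km of standard rock
    # (density 2.65 g/cm^3, i.e. a column depth of ~2.65e5 g/cm^2).
    print(f"range: {muon_range(1000.0):.3e} g/cm^2")
    print(f"energy after 1 km of rock: {surviving_energy(1000.0, 2.65e5):.1f} GeV")
    ```

    Monte Carlo codes such as MUSIC go beyond this mean-loss picture by sampling the stochastic radiative losses, which dominate the fluctuations at TeV energies.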

  17. Fluid Physics in a Fluctuating Acceleration Environment

    NASA Technical Reports Server (NTRS)

    Drolet, Francois; Vinals, Jorge

    1999-01-01

    Our program of research aims at developing a stochastic description of the residual acceleration field onboard spacecraft (g-jitter) to describe in quantitative detail its effect on fluid motion. Our main premise is that such a statistical description is necessary in those cases in which the characteristic time scales of the process under investigation are long compared with the correlation time of g-jitter. Although a clear separation between time scales makes this approach feasible, there remain several difficulties of a practical nature: (i) g-jitter time series are not statistically stationary but rather show definite dependences on factors such as active or rest crew periods; (ii) it is very difficult to extract reliably the low-frequency range of the power spectrum of the acceleration field, and this range controls the magnitude of diffusive processes; and (iii) models used to date are Gaussian, but there is evidence that large-amplitude disturbances occur much more frequently than a Gaussian distribution would predict. The lack of stationarity does not constitute a severe limitation in practice, since the intensity of the stochastic components changes very slowly during space missions (perhaps over times of the order of hours). A separate analysis of large-amplitude disturbances has not been undertaken yet, but it does not seem difficult a priori to devise models that may describe this range better than a Gaussian distribution. The effect of low-frequency components, on the other hand, is more difficult to ascertain, partly due to the difficulty associated with measuring them, and partly because they may be indistinguishable from slowly changing averages. This latter effect is further complicated by the lack of statistical stationarity of the time series. Recent work has focused on the effect of stochastic modulation on the onset of oscillatory instabilities as an example of resonant interaction between the driving acceleration and normal modes of the system.

  18. Particle acceleration on a chip: A laser-driven micro-accelerator for research and industry

    NASA Astrophysics Data System (ADS)

    Yoder, R. B.; Travish, G.

    2013-03-01

    Particle accelerators are conventionally built from radio-frequency metal cavities, but this technology limits the maximum energy available and prevents miniaturization. In the past decade, laser-powered acceleration has been intensively studied as an alternative technology promising much higher accelerating fields in a smaller footprint and taking advantage of recent advances in photonics. Among the more promising approaches are those based on dielectric field-shaping structures. These "dielectric laser accelerators" (DLAs) scale with the laser wavelength employed and can be many orders of magnitude smaller than conventional accelerators; DLAs may enable the production of high-intensity, ultra-short relativistic electron bunches in a chip-scale device. When combined with a high-Z target or an optical-period undulator, these systems could produce high-brilliance x-rays from a breadbox-sized device having multiple applications in imaging, medicine, and homeland security. In our research program we have developed one such DLA, the Micro-Accelerator Platform (MAP). We describe the fundamental physics, our fabrication and testing program, and experimental results to date, along with future prospects for MAP-based light-sources and some remaining challenges. Supported in part by the Defense Threat Reduction Agency and National Nuclear Security Administration.

  19. A 2 MV Van de Graaff accelerator as a tool for planetary and impact physics research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mocker, Anna; Bugiel, Sebastian; Srama, Ralf

    Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut fuer Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km s⁻¹. Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km s⁻¹ and with diameters of between 0.05 μm and 5 μm. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). 
The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds

  20. Particle acceleration at a reconnecting magnetic separator

    NASA Astrophysics Data System (ADS)

    Threlfall, J.; Neukirch, T.; Parnell, C. E.; Eradat Oskoui, S.

    2015-02-01

    Context. While the exact acceleration mechanism of energetic particles during solar flares is (as yet) unknown, magnetic reconnection plays a key role both in the release of stored magnetic energy of the solar corona and in the magnetic restructuring during a flare. Recent work has shown that special field lines, called separators, are common sites of reconnection in 3D numerical experiments. To date, 3D separator reconnection sites have received little attention as particle accelerators. Aims: We investigate the effectiveness of separator reconnection as a particle acceleration mechanism for electrons and protons. Methods: We study the particle acceleration using a relativistic guiding-centre particle code in a time-dependent kinematic model of magnetic reconnection at a separator. Results: The effect of initial position, pitch angle, and initial kinetic energy upon particle behaviour is examined in detail, both for specific (single) particle examples and for large distributions of initial conditions. The separator reconnection model contains several free parameters, and we study the effect of changing these parameters upon particle acceleration, in particular with a view to identifying final particle energy ranges that agree with observed energy spectra.

  1. TEACHING PHYSICS: Atwood's machine: experiments in an accelerating frame

    NASA Astrophysics Data System (ADS)

    Teck Chee, Chia; Hong, Chia Yee

    1999-03-01

    Experiments in an accelerating frame are often difficult to perform, but simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine.
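    For reference, the ideal Atwood's machine measured in such experiments obeys a = (m1 − m2)g/(m1 + m2) for the acceleration and T = 2·m1·m2·g/(m1 + m2) for the string tension. A minimal calculation with illustrative masses:

    ```python
    # Ideal Atwood's machine: two masses on a massless string over a
    # frictionless, massless pulley.
    def atwood(m1, m2, g=9.81):
        """Return (acceleration, string tension) for masses m1, m2 in kg."""
        a = (m1 - m2) * g / (m1 + m2)
        t = 2.0 * m1 * m2 * g / (m1 + m2)
        return a, t

    a, t = atwood(0.55, 0.45)   # illustrative 550 g and 450 g masses
    print(f"a = {a:.3f} m/s^2, T = {t:.3f} N")
    ```

    With nearly balanced masses the acceleration is a small fraction of g, which is precisely why the arrangement makes slow, measurable motion accessible to simple timing software.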

  2. Beamlets from stochastic acceleration

    NASA Astrophysics Data System (ADS)

    Perri, Silvia; Carbone, Vincenzo

    2008-09-01

    We investigate the dynamics of a realization of the stochastic Fermi acceleration mechanism. The model consists of test particles moving between two oscillating magnetic clouds and differs from the usual Fermi-Ulam model in two ways. (i) Particles can penetrate inside clouds before being reflected. (ii) Particles can radiate a fraction of their energy during the process. Since the Fermi mechanism is at work, particles are stochastically accelerated, even in the presence of the radiated energy. Furthermore, due to a kind of resonance between particles and oscillating clouds, the probability density function of particles is strongly modified, thus generating beams of accelerated particles rather than a translation of the whole distribution function to higher energy. This simple mechanism could account for the presence of beamlets in some space plasma physics situations.
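    A toy version of the mechanism (not the authors' code) can be written as a random map: at each encounter the particle's speed changes by twice the cloud velocity, randomly sampled, and a small fraction of the energy is radiated per step, mirroring the two model features above.

    ```python
    import random

    # Toy stochastic Fermi process: reflection off a randomly moving
    # "cloud" plus a small radiative energy loss per step.
    def fermi_step(v, u_max, loss=0.01):
        u = random.uniform(-u_max, u_max)      # cloud velocity at impact
        v = abs(v + 2.0 * u)                   # elastic reflection off moving cloud
        return v * (1.0 - loss) ** 0.5         # radiate fraction `loss` of energy

    random.seed(0)                             # reproducible toy run
    v = 1.0
    for _ in range(10_000):
        v = fermi_step(v, u_max=0.5)
    print(f"final speed: {v:.2f}")
    ```

    The competition between the stochastic gain and the radiative loss sets a statistical equilibrium; the paper's richer dynamics (cloud penetration, resonance, beamlets) modify the resulting distribution rather than this basic balance.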

  3. Advanced Multi-Physics (AMP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby

    2012-06-01

    The Advanced Multi-Physics (AMP) code, in its present form, will allow a user to build a multi-physics application code for existing mechanics and diffusion operators and extend them with user-defined material models and new physics operators. There are examples that demonstrate mechanics, thermo-mechanics, coupled diffusion, and mechanical contact. The AMP code is designed to leverage a variety of mathematical solvers (PETSc, Trilinos, SUNDIALS, and AMP solvers) and mesh databases (LibMesh and AMP) in a consistent interchangeable approach.

  4. The Los Alamos Laser Acceleration of Particles Workshop and beginning of the advanced accelerator concepts field

    NASA Astrophysics Data System (ADS)

    Joshi, C.

    2012-12-01

    The first Advanced Accelerator Concepts (AAC) Workshop (actually named the Laser Acceleration of Particles Workshop) was held at Los Alamos in January 1982. The workshop lasted a week and divided all the acceleration techniques into four categories: near field, far field, media, and vacuum. Basic theorems of particle acceleration were postulated (and later proven), and specific experiments based on the four categories were formulated. This landmark workshop led to the formation of the advanced accelerator R&D program in the HEP office of the DOE that supports advanced accelerator research to this day. Two major new user facilities at Argonne and Brookhaven and several more directed experimental efforts were built to explore the advanced particle acceleration schemes. It is not an exaggeration to say that the intellectual breadth and excitement provided by the many groups who entered this new field provided the needed vitality to the then recently formed APS Division of Beams and the new online journal Physical Review Special Topics-Accelerators and Beams. On this 30th anniversary of the AAC Workshops, it is worthwhile to look back at the legacy of the first Workshop at Los Alamos and the fine groundwork it laid for the field of advanced accelerator concepts that continues to flourish to this day.

  5. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, drawn from both arc-jet testing and flight experiments. When using exactly the same physical models, material properties, and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO, but not in FIAT.
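    The agreement check implied above amounts to computing a maximum relative difference against a threshold. A minimal sketch with hypothetical temperature values (not data from the report):

    ```python
    import numpy as np

    # Code-to-code comparison metric: maximum relative difference between
    # two codes' predictions, checked against a 2% agreement threshold.
    def max_rel_diff(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(np.max(np.abs(a - b) / np.maximum(np.abs(b), 1e-30)))

    # Hypothetical surface-temperature histories (K) from two solvers.
    code_a = [300.0, 512.4, 1204.8, 2210.5]
    code_b = [300.0, 510.0, 1190.0, 2195.0]

    diff = max_rel_diff(code_a, code_b)
    print(f"max relative difference: {diff:.2%}")   # well under 2%
    ```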

  6. The EGS5 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, Hideo; Namito, Yoshihito (KEK, Tsukuba)

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application bring new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required something more substantial than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. 
With the release of the EGS4

  7. Numerical studies of acceleration of thorium ions by a laser pulse of ultra-relativistic intensity

    NASA Astrophysics Data System (ADS)

    Domanski, Jaroslaw; Badziak, Jan

    2018-01-01

    One of the key scientific projects of ELI-Nuclear Physics is to study the production of extremely neutron-rich nuclides by a new reaction mechanism called fission-fusion, using laser-accelerated thorium (232Th) ions. This research is of crucial importance for understanding the nature of the creation of heavy elements in the Universe; however, it requires Th ion beams of very high fluence and intensity, which are inaccessible in conventional accelerators. This contribution is a first attempt to investigate the possibility of generating intense Th ion beams with a fs laser pulse of ultra-relativistic intensity. The investigation was performed with a fully electromagnetic relativistic particle-in-cell code. A sub-μm thorium target was irradiated by a circularly polarized 20-fs laser pulse of intensity up to 10²³ W/cm², predicted to be attainable at ELI-NP. At a laser intensity of 10²³ W/cm² and an optimum target thickness, the maximum energies of Th ions approach 9.3 GeV, the ion beam intensity is > 10²⁰ W/cm², and the total ion fluence reaches 10¹⁹ ions/cm². The last two values are much higher than attainable in conventional accelerators and are fairly promising for the planned ELI-NP experiment.

  8. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...

    2017-10-17

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  9. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry, which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  10. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2018-01-01

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
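
    The time-averaged ponderomotive description rests on the fact that, over a laser period, a charge in a spatially varying oscillating field feels a slow net force proportional to the gradient of the squared envelope, F_p = -q²/(4mω²) d(E₀²)/dx in the nonrelativistic limit. The sketch below (illustrative normalized parameters, not from the paper) integrates the full oscillatory motion and compares the secular drift against this envelope formula:

```python
import numpy as np

# Nonrelativistic charge in E(x, t) = E0(x) cos(w t); units with q = m = 1.
# Illustrative parameters (not from the paper): Gaussian envelope, fast carrier.
w, A, sigma, x0 = 50.0, 1.0, 1.0, 0.5

def E0(x):
    return A * np.exp(-x**2 / (2 * sigma**2))

def accel(x, t):
    return E0(x) * np.cos(w * t)            # a = qE/m with q = m = 1

# Velocity-Verlet integration over ~160 laser cycles
dt = 2 * np.pi / w / 200
n = int(20.0 / dt)
xs = np.empty(n)
x, v = x0, 0.0
for i in range(n):
    xs[i] = x
    a1 = accel(x, i * dt)
    x += v * dt + 0.5 * a1 * dt**2
    a2 = accel(x, (i + 1) * dt)
    v += 0.5 * (a1 + a2) * dt

# Secular acceleration from a quadratic fit of x(t); the fast quiver averages out
t = np.arange(n) * dt
a_fit = 2 * np.polyfit(t, xs, 2)[0]

# Ponderomotive prediction: a_p = -(1/(4 w^2)) d(E0^2)/dx evaluated at x0
a_pond = -(1.0 / (4 * w**2)) * (-2 * x0 / sigma**2) * E0(x0)**2
print(a_fit, a_pond)                        # slow drift away from the intensity peak
```

    The fitted secular acceleration agrees with the envelope-gradient formula to a few percent, which is exactly the approximation that lets an envelope code step over the laser wavelength scale.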

  11. High gradient tests of metallic mm-wave accelerating structures

    DOE PAGES

    Dal Forno, Massimo; Dolgashev, Valery; Bowden, Gordon; ...

    2017-05-10

    This study explores the physics of vacuum rf breakdowns in high-gradient mm-wave accelerating structures. We performed a series of experiments with 100 GHz and 200 GHz metallic accelerating structures at the Facility for Advanced Accelerator Experimental Tests (FACET) at the SLAC National Accelerator Laboratory. This paper presents the experimental results of rf tests of 100 GHz travelling-wave accelerating structures made of hard copper-silver alloy. The results are compared with pure hard copper structures. The rf fields were excited by the FACET ultra-relativistic electron beam. The accelerating structures have open geometries, are 10 cm long, and are composed of two halves separated by a variable gap. The rf frequency of the fundamental accelerating mode depends on the gap size and can be changed from 90 GHz to 140 GHz. The measured frequency and pulse length are consistent with our simulations. When the beam travels off-axis, a deflecting field is induced in addition to the decelerating longitudinal field. We measured the deflecting forces by observing the displacement of the electron bunch and used this measurement to verify the expected accelerating gradient. We present the first quantitative measurement of rf breakdown rates in a 100 GHz copper-silver accelerating structure: 10⁻³ per pulse, with a peak electric field of 0.42 GV/m and an accelerating gradient of 127 MV/m at a pulse length of 2.3 ns. The goal of our studies is to understand the physics of gradient limitations in order to increase the energy reach of future accelerators.

  12. High gradient tests of metallic mm-wave accelerating structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dal Forno, Massimo; Dolgashev, Valery; Bowden, Gordon

    This study explores the physics of vacuum rf breakdowns in high-gradient mm-wave accelerating structures. We performed a series of experiments with 100 GHz and 200 GHz metallic accelerating structures at the Facility for Advanced Accelerator Experimental Tests (FACET) at the SLAC National Accelerator Laboratory. This paper presents the experimental results of rf tests of 100 GHz travelling-wave accelerating structures made of hard copper-silver alloy. The results are compared with pure hard copper structures. The rf fields were excited by the FACET ultra-relativistic electron beam. The accelerating structures have open geometries, are 10 cm long, and are composed of two halves separated by a variable gap. The rf frequency of the fundamental accelerating mode depends on the gap size and can be changed from 90 GHz to 140 GHz. The measured frequency and pulse length are consistent with our simulations. When the beam travels off-axis, a deflecting field is induced in addition to the decelerating longitudinal field. We measured the deflecting forces by observing the displacement of the electron bunch and used this measurement to verify the expected accelerating gradient. We present the first quantitative measurement of rf breakdown rates in a 100 GHz copper-silver accelerating structure: 10⁻³ per pulse, with a peak electric field of 0.42 GV/m and an accelerating gradient of 127 MV/m at a pulse length of 2.3 ns. The goal of our studies is to understand the physics of gradient limitations in order to increase the energy reach of future accelerators.

  13. Additions and improvements to the high energy density physics capabilities in the FLASH code

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Flocke, N.; Graziani, C.; Tzeferacos, P.; Weide, K.

    2016-10-01

    FLASH is an open-source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code that can treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures and has a broad user base. Extensive high energy density physics (HEDP) capabilities have been added to FLASH to make it an open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. In particular, we showcase the ability of FLASH to simulate the Faraday rotation measure produced by the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics with and without magnetic fields. We also describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789, and by the NSF under Grant PHY-0903997.
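
    The Faraday rotation measure diagnostic mentioned above reduces, for a known plasma column, to a line integral of electron density times the line-of-sight magnetic field. A minimal sketch of the formula (standard physical constants; the uniform column is an illustrative stand-in, not a FLASH dataset), checked against the textbook value of ~0.81 rad/m² for 1 cm⁻³, 1 μG, and 1 pc:

```python
import numpy as np

# Physical constants (SI)
e, m_e, c, eps0 = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8, 8.8541878128e-12

def rotation_measure(n_e, B_par, dl):
    """RM = e^3/(8 pi^2 eps0 m_e^2 c^3) * integral n_e B_parallel dl  [rad/m^2].

    n_e  : electron density samples along the line of sight [m^-3]
    B_par: line-of-sight magnetic field samples [T]
    dl   : uniform sample spacing [m]
    """
    k = e**3 / (8 * np.pi**2 * eps0 * m_e**2 * c**3)
    return k * np.sum(n_e * B_par) * dl

# Uniform illustrative column: 1 cm^-3 and 1 microgauss over 1 parsec
pc = 3.0857e16
l = np.linspace(0.0, pc, 1000)
dl = l[1] - l[0]
rm = rotation_measure(np.full_like(l, 1e6), np.full_like(l, 1e-10), dl)
print(rm)  # ~0.81 rad/m^2
```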

  14. PelePhysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PelePhysics is a suite of physics packages that provides functionality of use to reacting hydrodynamics CFD codes. The initial release includes an interface to reaction rate mechanism evaluation, transport coefficient evaluation, and a generalized equation of state (EOS) facility. Both generic evaluators and interfaces to code from externally available tools (Fuego for chemical rates, EGLib for transport coefficients) are provided.
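
    As a sketch of what a generalized EOS facility looks like from a client code's perspective (a hypothetical interface for illustration, not the actual PelePhysics API), one can define a common protocol that concrete models implement, so the hydrodynamics code never depends on a particular gas model:

```python
from abc import ABC, abstractmethod

class EOS(ABC):
    """Hypothetical generalized equation-of-state interface (not the PelePhysics API)."""

    @abstractmethod
    def pressure(self, rho, T):
        """Pressure [Pa] from density [kg/m^3] and temperature [K]."""

    @abstractmethod
    def internal_energy(self, rho, T):
        """Specific internal energy [J/kg]."""

class IdealGas(EOS):
    """Calorically perfect ideal gas as one concrete implementation."""

    def __init__(self, gamma=1.4, R=287.0):   # air-like defaults
        self.gamma, self.R = gamma, R

    def pressure(self, rho, T):
        return rho * self.R * T

    def internal_energy(self, rho, T):
        return self.R * T / (self.gamma - 1.0)

eos = IdealGas()
p = eos.pressure(1.225, 300.0)   # sea-level-like air state, ~1.05e5 Pa
ei = eos.internal_energy(1.225, 300.0)
```

    A client solver only holds an `EOS` reference, so swapping in a tabulated or multi-species model is a constructor change, not a code change.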

  15. Benchmark of neutron production cross sections with Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Tsai, Pi-En; Lai, Bo-Lun; Heilbronn, Lawrence H.; Sheu, Rong-Jiun

    2018-02-01

    Aiming to provide critical information in the fields of heavy-ion therapy, radiation shielding in space, and facility design for heavy-ion research accelerators, the physics models in three Monte Carlo simulation codes, PHITS, FLUKA, and MCNP6, were systematically benchmarked against fifteen sets of experimental data for neutron production cross sections, covering various combinations of ¹²C, ²⁰Ne, ⁴⁰Ar, ⁸⁴Kr and ¹³²Xe projectiles and natural Li, C, Al, Cu, and Pb target nuclides at incident energies between 135 MeV/nucleon and 600 MeV/nucleon. For neutron energies above 60% of the specific projectile energy per nucleon, LAQGSM03.03 in MCNP6, JQMD/JQMD-2.0 in PHITS, and RQMD-2.4 in FLUKA all show better agreement with data for heavy-projectile systems than for light-projectile systems, suggesting that the collective properties of projectile nuclei and nucleon interactions in the nucleus should be considered for light projectiles. For intermediate-energy neutrons, whose energies are below 60% of the projectile energy per nucleon and above 20 MeV, FLUKA is likely to overestimate the secondary neutron production, while MCNP6 tends towards underestimation. PHITS with JQMD shows a mild tendency for underestimation, but the JQMD-2.0 model, with a modified physics description for central collisions, generally improves the agreement between data and calculations. For low-energy neutrons (below 20 MeV), which are dominated by the evaporation mechanism, PHITS (which uses GEM linked with JQMD and JQMD-2.0) and FLUKA both tend to overestimate the production cross section, whereas MCNP6 tends to underestimate more systems than it overestimates. For total neutron production cross sections, the trends of the benchmark results over the entire energy range are similar to those seen in the dominant energy region.
    Also, the comparison of GEM coupled with either JQMD or JQMD-2.0 in the PHITS code indicates that the model used to describe the first
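
    The three energy regions used in the benchmark above can be made explicit. A small helper (illustrative, not part of any of the benchmarked codes) classifies a secondary-neutron energy given the projectile energy per nucleon:

```python
def neutron_region(E_n, E_proj_per_u):
    """Classify a secondary-neutron energy (MeV) into the benchmark's regions.

    - 'high'        : above 60% of the projectile energy per nucleon
    - 'intermediate': between 20 MeV and that 60% boundary
    - 'low'         : below 20 MeV (evaporation-dominated)
    """
    if E_n > 0.6 * E_proj_per_u:
        return "high"
    if E_n > 20.0:
        return "intermediate"
    return "low"

# For a 135 MeV/nucleon projectile the high/intermediate boundary sits at 81 MeV:
print(neutron_region(100.0, 135.0))  # high
print(neutron_region(50.0, 135.0))   # intermediate
print(neutron_region(10.0, 135.0))   # low
```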

  16. Mount Aragats as a stable electron accelerator for atmospheric high-energy physics research

    NASA Astrophysics Data System (ADS)

    Chilingarian, Ashot; Hovsepyan, Gagik; Mnatsakanyan, Eduard

    2016-03-01

    Observation of the numerous thunderstorm ground enhancements (TGEs), i.e., enhanced fluxes of electrons, gamma rays, and neutrons detected by particle detectors located on the Earth's surface and related to the strong thunderstorms above it, helped to establish a new scientific topic: high-energy physics in the atmosphere. Relativistic runaway electron avalanches (RREAs) are believed to be a central engine initiating high-energy processes in thunderstorm atmospheres. RREAs observed on Mount Aragats in Armenia during the strongest thunderstorms, together with simultaneous measurements of TGE electron and gamma-ray energy spectra, proved that RREAs are a robust and realistic mechanism for electron acceleration. TGE research facilitates investigations of the long-standing lightning initiation problem. For the last 5 years we have been experimenting with the "beams" of "electron accelerators" operating in the thunderclouds above the Aragats research station. Thunderstorms are very frequent above Aragats, peaking in May-June, and almost all of them are accompanied by enhanced particle fluxes. The station is located on a plateau at an altitude of 3200 m a.s.l. near a large lake. Numerous particle detectors and field meters are located in three experimental halls as well as outdoors; the facilities are operated all year round. All relevant information is gathered, including data on particle fluxes, fields, lightning occurrences, and meteorological conditions. Using the example of the huge thunderstorm that took place at Mount Aragats on August 28, 2015, we show that simultaneous detection of all the relevant data allowed us to reveal the temporal pattern of the storm's development and to investigate the atmospheric discharges and particle fluxes.

  17. NORTICA—a new code for cyclotron analysis

    NASA Astrophysics Data System (ADS)

    Gorelov, D.; Johnson, D.; Marti, F.

    2001-12-01

    The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is to assist in the setup and tuning of the cyclotrons, taking into account the main field and extraction channel imperfections. The computing platform for the package is an Alpha Station with the UNIX operating system and an X-Windows graphical interface. A multiple-programming-language approach was used in order to combine the reliability of numerical algorithms developed over a long period of time in the laboratory with the friendliness of a modern-style user interface. This paper describes the capabilities and features of the codes in their present state.

  18. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes

    PubMed Central

    2017-01-01

    Optimizing the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success, as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual-socket workstation and a speedup between 3.4 and 3.8 for adding an NVIDIA K80 to a dual-socket workstation. PMID:28582389

  19. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes.

    PubMed

    Einkemmer, Lukas

    2017-01-01

    Optimizing the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success, as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual-socket workstation and a speedup between 3.4 and 3.8 for adding an NVIDIA K80 to a dual-socket workstation.
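
    The inner solver in such an optimization loop boils down to assembling a dense influence matrix and solving a linear system per candidate geometry, which is exactly the kernel that maps well to MKL, CUDA, or MAGMA. A minimal 2D example of that structure (a classic lumped-vortex discretization of a flat plate, not the paper's code) reproduces the thin-airfoil lift slope Cl = 2π sin α:

```python
import numpy as np

def flat_plate_cl(alpha, n_panels=20, chord=1.0, U=1.0):
    """Lift coefficient of a flat plate via the lumped-vortex panel method.

    Each panel carries a point vortex at its quarter-chord point; flow
    tangency is enforced at the three-quarter-chord control points.
    """
    d = chord / n_panels
    xv = (np.arange(n_panels) + 0.25) * d        # vortex locations
    xc = (np.arange(n_panels) + 0.75) * d        # control points
    # Dense influence matrix: normal velocity at control i from unit vortex j
    A = 1.0 / (2 * np.pi * (xc[:, None] - xv[None, :]))
    rhs = np.full(n_panels, U * np.sin(alpha))   # cancel freestream normal flow
    gamma = np.linalg.solve(A, rhs)              # the accelerator-friendly kernel
    return 2.0 * gamma.sum() / (U * chord)       # lift from total circulation

alpha = np.deg2rad(5.0)
cl = flat_plate_cl(alpha)
print(cl, 2 * np.pi * np.sin(alpha))   # discrete result matches 2*pi*sin(alpha)
```

    The quarter/three-quarter-chord placement makes the discrete result agree with the analytic flat-plate value for any panel count, which makes it a convenient self-test for a panel-code kernel.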

  20. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting the way we implement PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy consumption related to data movement by using more and more cores on each compute node ("fat nodes"), with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that can process multiple data with one arithmetic operation in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process multiple instructions on multiple data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
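
    One PIC kernel where memory locality and vectorization collide is charge deposition: many particles scatter contributions into the same grid cells, so a naively vectorized update races with itself. A minimal NumPy sketch of linear (cloud-in-cell) deposition (an illustration of the pattern, not PICSAR or FBPIC code) sidesteps the race with an unbuffered scatter-add:

```python
import numpy as np

def deposit_cic(x, q, nx, dx):
    """Cloud-in-cell charge deposition of particles at positions x onto a 1D periodic grid.

    np.add.at performs an unbuffered scatter-add, so particles that share a
    cell do not overwrite each other's contribution (the race a plain
    vectorized rho[cell] += w would have).
    """
    rho = np.zeros(nx)
    cell = np.floor(x / dx).astype(int)
    frac = x / dx - cell                          # distance to the left grid point
    np.add.at(rho, cell % nx, q * (1.0 - frac))   # weight to left node
    np.add.at(rho, (cell + 1) % nx, q * frac)     # weight to right node (periodic)
    return rho

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10000)
rho = deposit_cic(x, np.full(10000, 1e-3), nx=64, dx=1.0 / 64)
print(rho.sum())   # total deposited charge equals the total particle charge
```

    Production codes get the same effect with particle sorting and per-tile private grids, which is the memory-locality strategy the abstract alludes to.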

  1. Designing a Dielectric Laser Accelerator on a Chip

    NASA Astrophysics Data System (ADS)

    Niedermayer, Uwe; Boine-Frankenheim, Oliver; Egenolf, Thilo

    2017-07-01

    Dielectric Laser Acceleration (DLA) achieves gradients of more than 1 GeV/m, which are among the highest in non-plasma accelerators. The long-term goal of the ACHIP collaboration is to provide relativistic (>1 MeV) electrons by means of a laser-driven microchip accelerator. Examples of "slightly resonant" dielectric structures showing gradients in the range of 70% of the incident laser field (1 GV/m) for electrons with beta = 0.32 and 200% for beta = 0.91 are presented. We demonstrate the bunching and acceleration of low-energy electrons in dedicated ballistic buncher and velocity-matched grating structures. However, the design gradient of 500 MeV/m leads to rapid defocusing. Therefore we present a scheme to bunch the beam in stages, which not only reduces the energy spread but also the transverse defocusing. The designs are made with a dedicated in-house 6D particle tracking code.
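
    Ballistic bunching follows the classic klystron mechanism: a sinusoidal velocity modulation converts into density modulation after a drift. A toy nonrelativistic sketch (illustrative parameters, not the ACHIP design) computes the fundamental bunching factor before and after the drift; for a modulation-drift parameter X = k·Δv·t the bunching factor follows |J₁(X)|, peaking near 0.58 at X ≈ 1.84:

```python
import numpy as np

lam = 1.0                    # modulation wavelength, normalized units
k = 2 * np.pi / lam
v0, eps = 1.0, 0.05          # mean velocity and relative velocity modulation

z = np.linspace(0.0, lam, 10000, endpoint=False)   # uniform initial phases
v = v0 * (1.0 + eps * np.sin(k * z))               # sinusoidal velocity modulation

def bunching(z):
    """Fundamental bunching factor |<exp(i k z)>| (0 = uniform, 1 = point bunch)."""
    return abs(np.mean(np.exp(1j * k * z)))

t = 1.84 / (k * eps * v0)    # drift time giving X = k*eps*v0*t = 1.84 (J1 maximum)
b0, b1 = bunching(z), bunching(z + v * t)
print(b0, b1)                # ~0 before the drift, ~0.58 at the optimum
```

    Staging, as in the abstract, amounts to keeping eps small per stage so the required velocity spread (and hence the defocusing) stays low while the bunching accumulates.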

  2. electromagnetics, eddy current, computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, David

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  3. Compact torus accelerator as a driver for ICF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tobin, M.T.; Meier, W.R.; Morse, E.C.

    1986-01-01

    The authors have carried out further investigations of the technical issues associated with using a compact torus (CT) accelerator as a driver for inertial confinement fusion (ICF). In a CT accelerator, a magnetically confined, torus-shaped plasma is compressed, accelerated, and focused by two concentric electrodes. After its initial formation, the torus shape is maintained for lifetimes exceeding 1 ms by inherent poloidal and toroidal currents. Hartman suggests acceleration and focusing of such a plasma ring will not cause dissolution within certain constraints. In this study, we evaluated a point design based on an available capacitor bank energy of 9.2 MJ. This accelerator, which was modeled by a zero-dimensional code, produces a xenon plasma ring with a 0.73-cm radius, a velocity of 4.14 × 10⁹ cm/s, and a mass of 4.42 μg. The energy of the plasma ring as it leaves the accelerator is 3.8 MJ, or 41% of the capacitor bank energy. Our studies confirm the feasibility of producing a plasma ring with the characteristics required to induce fusion in an ICF target with a gain greater than 50. The low cost and high efficiency of the CT accelerator are particularly attractive. Uncertainties concerning propagation, accelerator lifetime, and power supply must be resolved to establish the viability of the accelerator as an ICF driver.
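
    The quoted ring energy and efficiency follow directly from the stated mass and velocity via E = ½mv²; a quick arithmetic check of the abstract's numbers:

```python
# Kinetic energy of the xenon plasma ring from the quoted point design
m = 4.42e-9           # ring mass: 4.42 micrograms, in kg
v = 4.14e7            # ring velocity: 4.14e9 cm/s, in m/s
bank = 9.2e6          # capacitor bank energy, in J

E = 0.5 * m * v**2
print(E / 1e6)        # ~3.8 MJ, matching the quoted ring energy
print(E / bank)       # ~0.41, i.e. the quoted 41% efficiency
```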

  4. The Role of Substorms in Storm-time Particle Acceleration

    NASA Astrophysics Data System (ADS)

    Daglis, Ioannis A.; Kamide, Yohsuke

    The terrestrial magnetosphere can rapidly accelerate charged particles up to very high energies over relatively short times and distances. Acceleration of charged particles is an essential ingredient of both magnetospheric substorms and space storms. In the case of space storms, the ultimate result is a bulk flow of electric charge through the inner magnetosphere, commonly known as the ring current. Syun-Ichi Akasofu and Sydney Chapman, two of the early pioneers in space physics, postulated that the bulk acceleration of particles during storms is the additive result of partial acceleration during consecutive substorms. This paradigm has been heavily disputed in recent years. The new case is that substorm acceleration may be sufficient to produce individual high-energy particles that create auroras and possibly harm spacecraft, but it cannot produce the massive acceleration that constitutes a storm. This paper is a critical review of the long-standing issue of the storm-substorm relationship, or, in other words, of whether substorms facilitate or drive the build-up of the storm-time ring current. We mainly address the physical effect itself, i.e., the bulk acceleration of particles, and not the diagnostic of the process, i.e., the Dst index, which is rather often the case. Within the framework of particle acceleration, substorms retain their importance for storms because substorm-induced impulsive electric fields can provide the massive ion acceleration needed for the storm-time ring current buildup.

  5. Accelerated observers and the notion of singular spacetime

    NASA Astrophysics Data System (ADS)

    Olmo, Gonzalo J.; Rubiera-Garcia, Diego; Sanchez-Puente, Antonio

    2018-03-01

    Geodesic completeness is typically regarded as a basic criterion to determine whether a given spacetime is regular or singular. However, the principle of general covariance does not privilege any family of observers over the others and, therefore, observers with arbitrary motions should be able to provide a complete physical description of the world. This suggests that in a regular spacetime, all physically acceptable observers should have complete paths. In this work we explore this idea by studying the motion of accelerated observers in spherically symmetric spacetimes and illustrate it by considering two geodesically complete black hole spacetimes recently described in the literature. We show that for bound and locally unbound accelerations, the paths of accelerated test particles are complete, providing further support to the regularity of such spacetimes.

  6. Calculations of beam dynamics in Sandia linear electron accelerators, 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poukey, J.W.; Coleman, P.D.

    1985-03-01

    A number of code and analytic studies were made during 1984 that pertain to the Sandia linear accelerators MABE and RADLAC. In this report the authors summarize the important results of the calculations. New results include a better understanding of gap-induced radial oscillations, leakage currents in a typical MABE gap, emittance growth in a beam passing through a series of gaps, some new diocotron results, and the latest diode simulations for both accelerators. 23 references, 30 figures, 1 table.

  7. Operational and design aspects of accelerators for medical applications

    NASA Astrophysics Data System (ADS)

    Schippers, Jacobus Maarten; Seidel, Mike

    2015-03-01

    Originally, typical particle accelerators and their associated beam transport equipment were designed for particle and nuclear physics research and for isotope production. In the past few decades, such accelerators and related equipment have also been applied to medical use, initially in the original physics laboratory environment, but for the past 20 years also in hospital-based or purely clinical environments for particle therapy. The most important specific requirements of accelerators for radiation therapy with protons or ions are discussed. The focus is on accelerator design, operational, and formal aspects. We discuss the special requirements for reaching high reliability for patient treatments as well as accurate delivery of the dose at the correct position in the patient using modern techniques like pencil beam scanning. It is shown that the technical requirements, safety aspects, and required reliability of the accelerated beam differ substantially from those in a nuclear physics laboratory, and that this difference has significant implications for the safety and interlock systems. The operation of such a medical facility should be possible by non-accelerator specialists at different operating sites (treatment rooms). The organization and role of the control and interlock systems are arguably the most critical issues, and therefore a special, dedicated design is necessary in a facility providing particle therapy.

  8. Adapting smart phone applications about physics education to blind students

    NASA Astrophysics Data System (ADS)

    Bülbül, M. Ş.; Yiğit, N.; Garip, B.

    2016-04-01

    Today, most of the necessary equipment in a physics laboratory is available to smartphone users via applications. Physics teachers may measure anything from acceleration to sound volume with a phone's internal sensors. These sensors collect data, and smartphone applications make the raw data visible. Teachers who do not have well-equipped laboratories at their schools may thus have an opportunity to conduct experiments with the help of smartphones. In this study, we analyzed available open-source physics education applications from the perspective of blind users in inclusive learning environments. All apps are categorized as fully supported, partially supported, or non-supported. The roles of a blind learner's friend during the use of an application are categorized as reader, describer, or user. The apps mentioned in the study are compared with respect to additional characteristics such as size and download counts. Beyond dedicated apps, smartphones also provide weather information via the internet and other extra information for different experiments in the physics lab. QR-code reading and augmented reality are two further opportunities provided by smartphones for users in physics labs. We also summarized blind learners' smartphone experiences from the literature and listed some suggestions for application designers about concepts in physics.

  9. The International Committee for Future Accelerators (ICFA): 1976 to the present

    DOE PAGES

    Rubinstein, Roy

    2016-12-14

    The International Committee for Future Accelerators (ICFA) has been in existence for four decades. It plays an important role in enabling discussions by the world particle physics community on the status and future of very large particle accelerators and of the particle physics and related fields associated with them. This paper gives some indication of what ICFA is and does, and also describes its involvement in some of the more important developments in the particle physics field since its founding.

  10. Sheath field dynamics from time-dependent acceleration of laser-generated positrons

    NASA Astrophysics Data System (ADS)

    Kerr, Shaun; Fedosejevs, Robert; Link, Anthony; Williams, Jackson; Park, Jaebum; Chen, Hui

    2017-10-01

    Positrons produced in ultraintense laser-matter interactions are accelerated by the sheath fields established by fast electrons, typically resulting in quasi-monoenergetic beams. Experimental results from OMEGA EP show higher order features developing in the positron spectra when the laser energy exceeds one kilojoule. 2D PIC simulations using the LSP code were performed to give insight into these spectral features. They suggest that for high laser energies multiple, distinct phases of acceleration can occur due to time-dependent sheath field acceleration. The detailed dynamics of positron acceleration will be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, and funded by LDRD 17-ERD-010.

  11. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slowdowns, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
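
    The core pattern behind such multi-GPU decomposition is splitting the grid into subdomains plus ghost (halo) layers that are exchanged after every step. A serial NumPy sketch of the idea (an illustration of the pattern, not SMAUG+ code) runs one diffusion-like stencil update on two subdomains with a one-cell halo and checks that the stitched result matches the undecomposed update:

```python
import numpy as np

def step(u):
    """One explicit 5-point stencil update (interior points only; boundary fixed)."""
    out = u.copy()
    out[1:-1, 1:-1] = u[1:-1, 1:-1] + 0.1 * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2] - 4 * u[1:-1, 1:-1])
    return out

rng = np.random.default_rng(1)
u = rng.random((10, 66))        # global grid

# Reference: update the whole grid at once
ref = step(u)

# Decomposed: split columns across two "ranks", each with a one-cell halo
left = step(u[:, :34].copy())   # global columns 0..33; column 33 is its halo
right = step(u[:, 32:].copy())  # global columns 32..65; column 32 is its halo
# A real code would exchange the ghost columns here before the next step;
# after a single step, the owned interiors are already correct:
stitched = np.hstack([left[:, :33], right[:, 1:]])
print(np.allclose(stitched, ref))  # True
```

    Per step, each rank only communicates its one-cell-wide boundary strips, which is the granularity/communication trade-off the benchmarks above measure.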

  12. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigridding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity-Based DMR (SBMR), that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  14. Methods of geometrical integration in accelerator physics

    NASA Astrophysics Data System (ADS)

    Andrianov, S. N.

    2016-12-01

    In this paper we consider a method of geometric integration for the long-term evolution of a particle beam in cyclic accelerators, based on the matrix representation of the particle evolution operator. This method allows us to calculate the corresponding beam evolution in terms of two-dimensional matrices, including nonlinear effects. The ideology of geometric integration introduces into the computational algorithms the corrections necessary to preserve the qualitative properties of maps presented in the form of truncated series generated by the evolution operator. The formalism extends to both polarized and intense beams. Examples of practical applications are described.
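
    A minimal illustration of the matrix representation of the evolution operator, restricted to the linear part of the map: transfer matrices for a drift and a thin quadrupole are composed into a cell, and the symplectic condition (unit determinant in one transverse plane) is checked over many turns, which is precisely the qualitative property geometric integration is designed to preserve. The element lengths and focal strengths are arbitrary illustrative values.

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L in the (x, x') plane."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (f < 0: defocusing)."""
    return np.array([[1.0, 0.0], [-1.0/f, 1.0]])

# One FODO-like cell: focus, drift, defocus, drift.
M = drift(1.0) @ thin_quad(-2.0) @ drift(1.0) @ thin_quad(2.0)

# A linear 2D map is symplectic iff det M = 1; geometric integration
# insists this property survives long-term tracking over many turns.
assert abs(np.linalg.det(M) - 1.0) < 1e-12
M_turn = np.linalg.matrix_power(M, 1000)       # 1000 turns
assert abs(np.linalg.det(M_turn) - 1.0) < 1e-9
```

    For nonlinear maps the same requirement applies to the truncated-series (Taylor map) representation, where naive truncation can silently break symplecticity.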

  15. Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes

    NASA Technical Reports Server (NTRS)

    Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.

    2001-01-01

    part of their continuing efforts to support the users of the FLUKA code within the particle physics community. In keeping with the spirit of developing an evolving physics code, we are planning, as part of this project, to participate in the efforts to validate the core FLUKA physics in ground-based accelerator test runs. The emphasis of these test runs will be the physics of greatest interest in the simulation of the space radiation environment. Such a tool will be of great value to planners, designers and operators of future space missions, as well as for the design of the vehicles and habitats to be used on such missions. It will also be of aid to future experiments of various kinds that may be affected at some level by the ambient radiation environment, or in the analysis of hybrid experiment designs that have been discussed for space-based astronomy and astrophysics. The tool will be of value to the Life Sciences personnel involved in the prediction and measurement of radiation doses experienced by the crewmembers on such missions. In addition, the tool will be of great use to the planners of experiments to measure and evaluate the space radiation environment itself. It can likewise be useful in the analysis of safe havens, hazard mitigation plans, NASA's call for new research in composites, and to NASA engineers modeling the radiation exposure of electronic circuits. This code will provide an important complementary check on the predictions of analytic codes such as BRYNTRN/HZETRN that are presently used for many similar applications, and which have shortcomings that are more easily overcome with Monte Carlo type simulations. Finally, it is acknowledged that there are similar efforts based around the use of the GEANT4 Monte-Carlo transport code currently under development at CERN. It is our intention to make our software modular and sufficiently flexible to allow the parallel use of either FLUKA or GEANT4 as the physics transport engine.

  16. Inter-view prediction of intra mode decision for high-efficiency video coding-based multiview video coding

    NASA Astrophysics Data System (ADS)

    da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.

    2014-05-01

    Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency when compared to previous standards, but with a very important increase in the computational complexity since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with singleview and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and it exploits the relationship between prediction units (PUs) of neighbor depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
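
    The texture-driven restriction of the angular-mode search can be sketched as follows. This is only an illustration of the general idea: the angle-to-mode mapping and the candidate-set size used here are simplified assumptions, not the paper's actual algorithm.

```python
import numpy as np

def candidate_modes(block, n_candidates=5):
    """Pick a reduced set of HEVC-style angular mode indices (2..34)
    from the dominant texture gradient, instead of testing all 33
    directions. The linear angle-to-mode mapping is a simplification."""
    gy, gx = np.gradient(block.astype(float))      # row (y) and column (x) gradients
    # Edge direction is perpendicular to the gradient direction.
    angle = (np.arctan2(gy.sum(), gx.sum()) + np.pi/2) % np.pi
    centre = 2 + int(round(angle / np.pi * 32))    # fold [0, pi) onto modes 2..34
    half = n_candidates // 2
    modes = [min(34, max(2, centre + d)) for d in range(-half, half + 1)]
    return sorted(set(modes))

# A block with a vertical edge yields a small cluster of candidate modes.
block = np.zeros((8, 8))
block[:, 4:] = 255
modes = candidate_modes(block)
assert len(modes) <= 5 and all(2 <= m <= 34 for m in modes)
```

    An encoder would then run rate-distortion optimization only over this subset (plus planar/DC), which is the source of the complexity reduction the abstract reports.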

  17. Generation of bright attosecond x-ray pulse trains via Thomson scattering from laser-plasma accelerators.

    PubMed

    Luo, W; Yu, T P; Chen, M; Song, Y M; Zhu, Z C; Ma, Y Y; Zhuo, H B

    2014-12-29

    Generation of attosecond x-ray pulses attracts more and more attention within the advanced light source user community due to their potentially wide applications. Here we propose an all-optical scheme to generate bright, attosecond hard x-ray pulse trains by Thomson backscattering of similarly structured electron beams produced in a vacuum channel by a tightly focused laser pulse. Design parameters for a proof-of-concept experiment are presented and demonstrated by using a particle-in-cell code and a four-dimensional laser-Compton scattering simulation code to model both the laser-based electron acceleration and the Thomson scattering processes. Trains of 200-attosecond-duration hard x-ray pulses with stable longitudinal spacing, photon energies approaching 50 keV and maximum achievable peak brightness up to 10²⁰ photons/s/mm²/mrad²/0.1%BW for each micro-bunch are observed. The suggested physical scheme for attosecond x-ray pulse-train generation may directly access the fastest time scales relevant to electron dynamics in atoms, molecules and materials.
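
    The electron energy implied by the quoted photon energies follows from the head-on Thomson backscattering relation E_x ≈ 4γ²E_L (linear regime, γ ≫ 1). Assuming a Ti:sapphire drive laser at 800 nm, which is an assumption rather than a parameter stated above:

```python
import math

E_laser = 1239.84 / 800.0   # photon energy in eV for an 800 nm laser (hc = 1239.84 eV·nm)
E_xray = 50e3               # target scattered photon energy, eV (50 keV)

# Head-on Thomson backscattering: E_x ≈ 4 * gamma^2 * E_laser
gamma = math.sqrt(E_xray / (4.0 * E_laser))
kinetic_MeV = (gamma - 1.0) * 0.511     # electron kinetic energy, MeV

assert 80 < gamma < 100        # gamma of order 90
assert 40 < kinetic_MeV < 50   # a tens-of-MeV beam suffices for 50 keV x rays
```

    This is why a compact laser-plasma accelerator stage, rather than a conventional linac, can drive a hard x-ray Thomson source.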

  18. Ultra-High Gradient Channeling Acceleration in Nanostructures: Design/Progress of Proof-of-Concept (POC) Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Young Min; Green, A.; Lumpkin, A. H.

    2016-09-16

    A short bunch of relativistic particles or a short-pulse laser perturbs the density state of conduction electrons in a solid crystal and excites wakefields along atomic lattices in the crystal. Under a coupling condition the wakes, if excited, can accelerate channeling particles with TeV/m acceleration gradients in principle, since the density of charge carriers (conduction electrons) in solids, n₀ ≈ 10²⁰-10²³ cm⁻³, is significantly higher than what can be obtained in gaseous plasma. Nanostructures have some advantages over crystals for channeling applications of high power beams: the dechanneling rate can be reduced and the beam acceptance increased by the large size of the channels. For beam-driven acceleration, a bunch length with a sufficient charge density would need to be in the range of the plasma wavelength to properly excite plasma wakefields, and channeled particle acceleration with the wakefields must occur before the ions in the lattices move beyond the restoring threshold. In the case of excitation by short laser pulses, the dephasing length is appreciably increased with the larger channel, which enables channeled particles to gain sufficient amounts of energy. This paper describes simulation analyses of beam- and laser (X-ray)-driven acceleration in effective nanotube models obtained from the VSim and EPOCH codes. Experimental setups to detect wakefields are also outlined with accelerator facilities at Fermilab and NIU. In the FAST facility, the electron beamline was successfully commissioned at 50 MeV and is being upgraded toward higher energies for electron accelerator R&D. The 50 MeV injector beamline of the facility is used for X-ray crystal-channeling radiation with a diamond target. It has been proposed to utilize the same diamond crystal for a channeling acceleration POC test. Another POC experiment is also designed for the NIU accelerator lab with time-resolved electron diffraction. Recently, a stable generation of single
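
    The TeV/m figure quoted above follows from the scaling of the cold, nonrelativistic wave-breaking field with carrier density, E₀ = m_e c ω_p/e. A quick check over the quoted density range:

```python
import math

# Physical constants (SI).
e    = 1.602176634e-19      # elementary charge, C
m_e  = 9.1093837015e-31     # electron mass, kg
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
c    = 2.99792458e8         # speed of light, m/s

def wave_breaking_field(n_cm3):
    """Cold wave-breaking field E0 = m_e*c*omega_p/e in V/m,
    with omega_p = sqrt(n e^2 / (eps0 m_e))."""
    n = n_cm3 * 1e6                                  # cm^-3 -> m^-3
    omega_p = math.sqrt(n * e**2 / (eps0 * m_e))     # plasma frequency
    return m_e * c * omega_p / e

E_low  = wave_breaking_field(1e20)   # lower end of the quoted density range
E_high = wave_breaking_field(1e23)   # upper end

assert 0.5e12 < E_low < 2e12                            # ~1 TV/m, i.e. ~TeV/m gradients
assert abs(E_high / E_low - math.sqrt(1000)) < 1e-6     # E0 scales as sqrt(n)
```

    For comparison, a gaseous plasma at n₀ ~ 10¹⁸ cm⁻³ gives E₀ on the order of 100 GV/m, which is why solid-density channels are attractive despite the channeling constraints discussed above.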

  19. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation”.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Warren

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma based accelerator codes). We also used these ideas to develop a GPU enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, which is an FFT based electromagnetic PIC code, and new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented into OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge conserving current deposit for this algorithm. Very recently, we made progress in combining the speed up from the quasi-3D algorithm with that from the Lorentz boosted frame. SciDAC funds also contributed to the improvement and speed up of the quasi

  20. Accelerated testing of space batteries

    NASA Technical Reports Server (NTRS)

    Mccallum, J.; Thomas, R. E.; Waite, J. H.

    1973-01-01

    An accelerated life test program for space batteries is presented that fully satisfies empirical, statistical, and physical criteria for validity. The program includes thermal and other nonmechanical stress analyses as well as mechanical stress, strain, and rate of strain measurements.

  1. Laboratory laser acceleration and high energy astrophysics: γ-ray bursts and cosmic rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, T.; Takahashi, Y.

    1998-08-20

    Recent experimental progress in laser acceleration of charged particles (electrons) and its associated processes has shown that intense electromagnetic pulses can promptly accelerate charged particles to high energies and that their energy spectrum is quite hard. On the other hand, some high energy astrophysical phenomena, such as extremely high energy cosmic rays and energetic components of γ-ray bursts, call for new physical mechanisms that promptly accelerate particles to high energies. The authors suggest that the basic physics involved in laser acceleration experiments sheds light on some of the underlying mechanisms and the energy spectral characteristics of the promptly accelerated particles in these high energy astrophysical phenomena.

  2. Ponderomotive Acceleration in Coronal Loops

    NASA Astrophysics Data System (ADS)

    Dahlburg, Russell B.; Laming, J. Martin; Taylor, Brian; Obenschain, Keith

    2017-08-01

    Ponderomotive acceleration has been asserted to be a cause of the First Ionization Potential (FIP) effect, the by now well known enhancement in abundance, by a factor of 3-4 over photospheric values, of elements in the solar corona with FIP less than about 10 eV. It is shown here by means of numerical simulations that ponderomotive acceleration occurs in solar coronal loops, with the appropriate magnitude and direction, as a "byproduct" of coronal heating. The numerical simulations are performed with the HYPERION code, which solves the fully compressible three-dimensional magnetohydrodynamic equations including nonlinear thermal conduction and optically thin radiation. Numerical simulations of coronal loops with axial magnetic fields from 0.005 T to 0.02 T and lengths from 25,000 km to 75,000 km are presented. In the simulations the footpoints of the axial loop magnetic field are convected by random, large-scale motions. There is a continuous formation and dissipation of field-aligned current sheets which act to heat the loop. As a consequence of coronal magnetic reconnection, small scale, high speed jets form. The familiar vortex quadrupoles form at reconnection sites. Between the magnetic footpoints and the corona the reconnection flow merges with the boundary flow. It is in this region that the ponderomotive acceleration occurs. Mirroring the character of the coronal reconnection, the ponderomotive acceleration is also found to be intermittent.

  3. SuperB Progress Report for Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biagini, M.E.; Boni, R.; Boscolo, M.

    2012-02-14

    This report details the progress made by the SuperB Project in the area of the Collider since the publication of the SuperB Conceptual Design Report in 2007 and the Proceedings of SuperB Workshop VI in Valencia in 2008. With this document we propose a new electron-positron colliding beam accelerator to be built in Italy to study flavor physics in the B-meson system at an energy of 10 GeV in the center-of-mass. This facility is called a high luminosity B-factory with the project name 'SuperB'. This project builds on a long history of successful e+e- colliders built around the world, as illustrated in Figure 1.1. The key advances in the design of this accelerator come from recent successes at the DAFNE collider at INFN in Frascati, Italy, at PEP-II at SLAC in California, USA, and at KEKB at KEK in Tsukuba, Japan, and from new concepts in beam manipulation at the interaction region (IP) called the 'crab waist'. This new collider comprises two colliding beam rings, one at 4.2 GeV and one at 6.7 GeV, a common interaction region, a new injection system at full beam energies, and one of the two beams longitudinally polarized at the IP. Most of the new accelerator techniques needed for this collider have been achieved at other recently completed accelerators, including the new PETRA-3 light source at DESY in Hamburg (Germany) and the upgraded DAFNE collider at the INFN laboratory at Frascati (Italy), or during design studies of CLIC or the International Linear Collider (ILC). The project is to be designed and constructed by a worldwide collaboration of accelerator and engineering staff along with ties to industry. To save significant construction costs, many components from the PEP-II collider at SLAC will be recycled and used in this new accelerator. The interaction region will be designed in collaboration with the particle physics detector to guarantee successful mutual use. The accelerator collaboration will consist of several groups at present universities and

  4. PHYSICS OF OUR DAYS Physical conditions in potential accelerators of ultra-high-energy cosmic rays: updated Hillas plot and radiation-loss constraints

    NASA Astrophysics Data System (ADS)

    Ptitsyna, Kseniya V.; Troitsky, Sergei V.

    2010-10-01

    We review basic constraints on the acceleration of ultra-high-energy (UHE) cosmic rays (CRs) in astrophysical sources, namely, the geometric (Hillas) criterion and the restrictions from radiation losses in different acceleration regimes. Using the latest available astrophysical data, we redraw the Hillas plot and find potential UHECR accelerators. For the acceleration in the central engines of active galactic nuclei, we constrain the maximal UHECR energy for a given black hole mass. Among active galaxies, only the most powerful ones, radio galaxies and blazars, are able to accelerate protons to UHE, although acceleration of heavier nuclei is possible in much more abundant lower-power Seyfert galaxies.
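
    The geometric (Hillas) criterion mentioned above reduces to requiring that the gyroradius not exceed the source size, giving E_max ≈ Z e c B R for an ultra-relativistic particle. A sketch with illustrative radio-galaxy lobe parameters (the specific B and R values are assumptions for the example, not taken from the paper):

```python
import math

c = 2.99792458e8      # speed of light, m/s
kpc = 3.0857e19       # kiloparsec in metres

def hillas_energy_eV(Z, B_tesla, R_m):
    """Maximum confinement energy E_max ≈ Z*e*c*B*R for an
    ultra-relativistic particle; the factor e cancels when the
    result is expressed in eV, leaving Z*c*B*R."""
    return Z * c * B_tesla * R_m

# Illustrative radio-galaxy lobe: B ~ 1 microgauss (1e-10 T), R ~ 10 kpc.
E_proton = hillas_energy_eV(1, 1e-10, 10 * kpc)
E_iron   = hillas_energy_eV(26, 1e-10, 10 * kpc)

assert 5e18 < E_proton < 2e19                 # ~10^19 eV: the UHECR range
assert abs(E_iron / E_proton - 26) < 1e-9     # heavier nuclei scale with Z
```

    The Z-scaling in the last line is why lower-power sources such as Seyfert galaxies, ruled out for protons, remain viable accelerators of heavy nuclei, as the abstract notes.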

  5. Ion Beam Facilities at the National Centre for Accelerator based Research using a 3 MV Pelletron Accelerator

    NASA Astrophysics Data System (ADS)

    Trivedi, T.; Patel, Shiv P.; Chandra, P.; Bajpai, P. K.

    A 3.0 MV (Pelletron 9SDH-4, NEC, USA) low energy ion accelerator has recently been installed as the National Centre for Accelerator based Research (NCAR) at the Department of Pure & Applied Physics, Guru Ghasidas Vishwavidyalaya, Bilaspur, India. The facility is aimed at carrying out interdisciplinary research using ion beams, with a high-current TORVIS ion source (for H and He ions) and a SNICS source (for heavy ions). The facility includes two dedicated beam lines, one for ion beam analysis (IBA) and the other for ion implantation/irradiation, at the +20 and -10 degree ports of the switching magnet, respectively. Ions are injected into the accelerator tank at 60 kV, where, after stripping, positively charged ions are accelerated up to 29 MeV for Au. The installed ion beam analysis techniques include RBS, PIXE, ERDA and channelling.

  6. Fermilab | Science | Particle Accelerators

    Science.gov Websites

    2,300 physicists from all over the world come to Fermilab to conduct experiments using particle accelerators. Fermilab takes particle physics to the next level, collaborating with scientists and laboratories around the world, and is a world leader in accelerator research, development and industrialization. Learn more about IARC.

  7. Generation of low-emittance electron beams in electrostatic accelerators for FEL applications

    NASA Astrophysics Data System (ADS)

    Chen, Teng; Elias, Luis R.

    1995-02-01

    This paper reports results of transverse emittance studies and beam propagation in electrostatic accelerators for free electron laser applications. In particular, we discuss emittance growth analysis of a low current electron beam system consisting of a miniature thermionic electron gun and a National Electrostatics Corporation (NEC) accelerator tube. The emittance growth phenomenon is discussed in terms of thermal effects in the electron gun cathode and aberrations produced by field gradient changes occurring inside the electron gun and throughout the accelerator tube. A method of reducing aberrations using a solenoidal magnetic field is described. Analysis of electron beam emittance was done with the EGUN code. Beam propagation along the accelerator tube was studied using a cylindrically symmetric beam envelope equation that included beam self-fields and the external accelerator fields, which were derived from POISSON simulations.
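
    The envelope-equation approach mentioned above can be sketched for the simplest case: an emittance-dominated beam in a field-free drift, with no self-fields or external focusing. The equation then reduces to R'' = ε²/R³, which has the known analytic solution R² = R₀² + (εz/R₀)², giving a direct check on the integrator. This is a toy reduction, not the paper's full envelope model.

```python
import numpy as np

def envelope_drift(R0, eps, z_end, n=20000):
    """Integrate the emittance term of the envelope equation,
    R'' = eps^2 / R^3, through a field-free drift using
    kick-drift-kick leapfrog steps."""
    dz = z_end / n
    R, Rp = R0, 0.0                       # start at a waist: R'(0) = 0
    for _ in range(n):
        Rp += 0.5 * dz * eps**2 / R**3    # half kick
        R  += dz * Rp                     # drift
        Rp += 0.5 * dz * eps**2 / R**3    # half kick
    return R

R0, eps, z = 1e-3, 1e-6, 2.0              # 1 mm waist, 1 mm-mrad emittance, 2 m drift
R_num = envelope_drift(R0, eps, z)
R_exact = np.sqrt(R0**2 + (eps * z / R0)**2)   # analytic waist expansion
assert abs(R_num - R_exact) / R_exact < 1e-4
```

    Adding the space-charge (perveance) and external-field terms turns this into the full equation the authors integrated alongside the POISSON field maps.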

  8. Analyzing Collision Processes with the Smartphone Acceleration Sensor

    ERIC Educational Resources Information Center

    Vogt, Patrik; Kuhn, Jochen

    2014-01-01

    It has been illustrated several times how the built-in acceleration sensors of smartphones can be used gainfully for quantitative experiments in school and university settings (see the overview in Ref. 1). The physical issues in that case are manifold and apply, for example, to free fall, radial acceleration, several pendula, or the exploitation…

  9. The Bonn Electron Stretcher Accelerator ELSA: Past and future

    NASA Astrophysics Data System (ADS)

    Hillert, W.

    2006-05-01

    In 1953, it was decided to build a 500 MeV electron synchrotron in Bonn. It came into operation in 1958, being the first alternating gradient synchrotron in Europe. After five years of photoproduction experiments at this accelerator, a larger 2.5 GeV electron synchrotron was built and set into operation in 1967. Both synchrotrons were running for particle physics experiments until, from 1982 to 1987, a third accelerator, the electron stretcher ring ELSA, was constructed and set up in a separate ring tunnel below the physics institute. ELSA came into operation in 1987, using the pulsed 2.5 GeV synchrotron as pre-accelerator. ELSA serves either as a storage ring producing synchrotron radiation, or as a post-accelerator and pulse stretcher. Applying a slow extraction close to a third-integer resonance, external electron beams with energies up to 3.5 GeV and high duty factors are delivered to hadron physics experiments. Various photo- and electroproduction experiments, utilising the experimental set-ups PHOENICS, ELAN, SAPHIR, GDH and Crystal Barrel, have been carried out. During the late 1990s, a pulsed GaAs source of polarised electrons was constructed and set up at the accelerator. ELSA was upgraded in order to accelerate polarised electrons, compensating for depolarising resonances by applying the methods of fast tune jumping and harmonic closed orbit correction. With the experimental investigation of the GDH sum rule, the first experiment requiring both a polarised beam and a polarised target was successfully performed at the accelerator. In the near future, the stretcher ring will be further upgraded to increase the polarisation and current of the external electron beams. In addition, the aspects of an increase of the maximum energy to 5 GeV using superconducting resonators will be investigated.

  10. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  11. Beam breakup in an advanced linear induction accelerator

    DOE PAGES

    Ekdahl, Carl August; Coleman, Joshua Eugene; McCuistian, Brian Trent

    2016-07-01

    Two linear induction accelerators (LIAs) have been in operation for a number of years at the Los Alamos Dual Axis Radiographic Hydrodynamic Test (DARHT) facility. A new multipulse LIA is being developed. We have computationally investigated the beam breakup (BBU) instability in this advanced LIA. In particular, we have explored the consequences of the choice of beam injector energy and the grouping of LIA cells. We find that within the limited range of options presently under consideration for the LIA architecture, there is little adverse effect on the BBU growth. The computational tool that we used for this investigation was the beam dynamics code Linear Accelerator Model for DARHT (LAMDA). To confirm that LAMDA was appropriate for this task, we first validated it through comparisons with the experimental BBU data acquired on the DARHT accelerators.

  12. Particle Acceleration, Magnetic Field Generation in Relativistic Shocks

    NASA Technical Reports Server (NTRS)

    Nishikawa, Ken-Ichi; Hardee, P.; Hededal, C. B.; Richardson, G.; Sol, H.; Preece, R.; Fishman, G. J.

    2005-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, the two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient field and a weak ambient parallel magnetic field. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. New simulations with an ambient perpendicular magnetic field show strong interaction between the relativistic jet and the magnetic fields. The magnetic fields are piled up by the jet and the jet electrons are bent, which creates currents and displacement currents. At the nonlinear stage, the magnetic fields are reversed by the current and reconnection may take place. Due to these dynamics, the jet and ambient electrons are strongly accelerated in both parallel and perpendicular directions.

  13. Accelerator Technology and High Energy Physics Experiments, Photonics Applications and Web Engineering, Wilga, May 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    The paper is the second part (out of five) of the research survey of WILGA Symposium work, May 2012 Edition, concerned with accelerator technology and high energy physics experiments. It presents a digest of chosen technical work results shown by young researchers from different technical universities in this country during the XXXth Jubilee SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and the development of the JET and Pi-of-the-Sky experiments. The symposium is an annual summary of the development of numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country, with guests from this part of Europe. A digest of Wilga references is presented [1-275].

  14. Symplectic orbit and spin tracking code for all-electric storage rings

    NASA Astrophysics Data System (ADS)

    Talman, Richard M.; Talman, John D.

    2015-07-01

    Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," i.e. always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10⁻²⁹ e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach, which is approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, which is original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification"). Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for the
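
    The magic kinetic energy quoted above follows directly from the frozen-spin condition for an all-electric ring, γ² = 1 + 1/a, where a = (g-2)/2 is the proton's magnetic moment anomaly:

```python
import math

a = 1.79284735          # proton magnetic moment anomaly, (g-2)/2
m_p = 938.27208816      # proton rest mass, MeV/c^2

# Frozen-spin condition in a purely electric ring: gamma^2 = 1 + 1/a.
gamma_magic = math.sqrt(1.0 + 1.0 / a)
T_magic = (gamma_magic - 1.0) * m_p     # magic kinetic energy, MeV

assert abs(T_magic - 232.792) < 0.05    # reproduces the quoted 232.792 MeV
```

    Because the condition fixes γ exactly, any momentum spread in the stored beam translates into the in-plane precession described above, which is why eteapot must track spins and orbits together.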

  15. Particle Acceleration at the Sun and in the Heliosphere

    NASA Technical Reports Server (NTRS)

    Reames, Donald V.

    1999-01-01

    Energetic particles are accelerated in rich profusion at sites throughout the heliosphere. They come from solar flares in the low corona, from shock waves driven outward by coronal mass ejections (CMEs), and from planetary magnetospheres and bow shocks. They come from corotating interaction regions (CIRs) produced by high-speed streams in the solar wind, and from the heliospheric termination shock at the outer edge of the heliospheric cavity. We sample all these populations near Earth, but can distinguish them readily by their element and isotope abundances, ionization states, energy spectra, angular distributions and time behavior. Remote spacecraft have probed the spatial distributions of the particles and examined new sources in situ. Most acceleration sources can be "seen" only by direct observation of the particles; few photons are produced at these sites. Wave-particle interactions are an essential feature of acceleration sources and, for shock acceleration, new evidence of energetic-proton-generated waves has come from abundance variations and from local cross-field scattering. Element abundances often tell us about the physics of the source plasma itself, prior to acceleration. By comparing different populations, we learn more about the sources, and about the physics of acceleration and transport, than we can possibly learn from one source alone.

  16. The MCUCN simulation code for ultracold neutron physics

    NASA Astrophysics Data System (ADS)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies (0-300 neV) and can therefore be stored in material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation, via neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are under construction at new and planned UCN sources around the world. Monte Carlo (MC) simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  17. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  18. Tristan code and its application

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners, the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.
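At the heart of any electromagnetic particle code of this kind is the particle push. TRISTAN itself uses a relativistic leapfrog scheme in Fortran; as a hedged illustration of the idea only, here is a minimal nonrelativistic Boris push (function name and parameters are for illustration, not TRISTAN's actual interface):

```python
import math

def boris_push(v, e_field, b_field, qm, dt):
    """One nonrelativistic Boris step: half electric kick, exact
    magnetic rotation, half electric kick. v, e_field, b_field are
    3-vectors (lists); qm is the charge-to-mass ratio."""
    # half acceleration by E
    v_minus = [v[i] + 0.5 * qm * dt * e_field[i] for i in range(3)]
    # rotation around B (t and s are the standard Boris vectors)
    t = [0.5 * qm * dt * b for b in b_field]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    v_prime = [v_minus[0] + (v_minus[1] * t[2] - v_minus[2] * t[1]),
               v_minus[1] + (v_minus[2] * t[0] - v_minus[0] * t[2]),
               v_minus[2] + (v_minus[0] * t[1] - v_minus[1] * t[0])]
    v_plus = [v_minus[0] + (v_prime[1] * s[2] - v_prime[2] * s[1]),
              v_minus[1] + (v_prime[2] * s[0] - v_prime[0] * s[2]),
              v_minus[2] + (v_prime[0] * s[1] - v_prime[1] * s[0])]
    # second half acceleration by E
    return [v_plus[i] + 0.5 * qm * dt * e_field[i] for i in range(3)]

# With E = 0, the Boris rotation conserves kinetic energy exactly
v = [1.0, 0.0, 0.0]
for _ in range(1000):
    v = boris_push(v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], qm=1.0, dt=0.1)
speed = math.sqrt(sum(c * c for c in v))
print(f"speed after 1000 steps: {speed:.12f}")
```

The exact energy conservation of the magnetic rotation is the property that makes Boris-type pushers the standard choice in long-running particle-in-cell simulations.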

  19. Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trambauer, K.

    1997-07-01

    The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients, for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, as well as fission product and aerosol release from the core and their transport in the Reactor Coolant System. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs, including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, and initial and boundary conditions, (2) initialization of derived quantities, (3) steady-state calculation or input of restart data, and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. First is the conservation of mass and energy in the different subsystems, namely fluid, structures, and fission products and aerosols. Second is the convergence of the numerical solution and stability of the calculation. The third aspect is related to the code performance, and running

  20. The National Transport Code Collaboration Module Library

    NASA Astrophysics Data System (ADS)

    Kritz, A. H.; Bateman, G.; Kinsey, J.; Pankin, A.; Onjun, T.; Redd, A.; McCune, D.; Ludescher, C.; Pletzer, A.; Andre, R.; Zakharov, L.; Lodestro, L.; Pearlstein, L. D.; Jong, R.; Houlberg, W.; Strand, P.; Wiley, J.; Valanju, P.; John, H. St.; Waltz, R.; Mandrekas, J.; Mau, T. K.; Carlsson, J.; Braams, B.

    2004-12-01

    This paper reports on progress in developing a library of code modules under the auspices of the National Transport Code Collaboration (NTCC). Code modules are high-quality, fully documented software packages with a clearly defined interface. The modules provide a variety of functions, such as implementing numerical physics models; performing ancillary functions such as I/O or graphics; or providing tools for dealing with common issues in scientific programming, such as portability of Fortran codes. Researchers in the plasma community submit code modules, and a review procedure is followed to ensure adherence to programming and documentation standards. The review process is designed to provide added confidence with regard to the use of the modules and to allow users and independent reviewers to validate the claims of the modules' authors. All modules include source code; clear instructions for compilation of binaries on a variety of target architectures; and test cases with well-documented input and output. All the NTCC modules and ancillary information, such as current standards and documentation, are available from the NTCC Module Library Website http://w3.pppl.gov/NTCC. The goal of the project is to develop a resource of value to builders of integrated modeling codes and to plasma physics researchers generally. Currently, there are more than 40 modules in the module library.

  1. Laser acceleration

    NASA Astrophysics Data System (ADS)

    Tajima, T.; Nakajima, K.; Mourou, G.

    2017-02-01

    The fundamental idea of Laser Wakefield Acceleration (LWFA) is reviewed. An ultrafast intense laser pulse drives a coherent wakefield with a relativistic amplitude robustly supported by the plasma. While the large amplitude of the wakefield involves collective resonant oscillations of the eigenmode of the entire plasma electrons, the wake phase velocity ~ c and the ultrafastness of the laser pulse give the wake stability and rigidity. A large number of worldwide experiments show rapid progress in realizing this concept, toward both the high-energy accelerator prospect and broad applications. The strong interest in this field has spurred and stimulated novel laser technologies, including Chirped Pulse Amplification, Thin Film Compression, the Coherent Amplification Network, and Relativistic Mirror Compression. These in turn have created a conglomerate of novel science and technology with LWFA, forming a new genre of high field science in which many figures of merit have lately been increasing exponentially. This science has triggered a number of worldwide research centers and initiatives. The associated physics of ion acceleration, X-ray generation, and astrophysical processes of ultrahigh-energy cosmic rays is reviewed. Applications such as X-ray free electron lasers, cancer therapy, and radioisotope production are considered. A new avenue of LWFA using nanomaterials is also emerging.

  2. Study on radiation production in the charge stripping section of the RISP linear accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Oranj, Leila Mokhtari; Lee, Hee-Seock; Ko, Seung-Kook

    2015-02-01

    The linear accelerator of the Rare Isotope Science Project (RISP) accelerates 200 MeV/nucleon 238U ions in multiple charge states. Many kinds of radiation are generated while the primary beam is transported along the beam line. The stripping process using a thin carbon foil leads to complicated radiation environments at the 90-degree bending section. The charge distribution of 238U ions after the carbon charge stripper was calculated using the LISE++ program. The radiation environments were estimated using the well-proven Monte Carlo codes PHITS and FLUKA. The tracks of 238U ions in various charge states were identified using the magnetic field subroutine of the PHITS code. The dose distribution caused by U beam losses along those tracks was obtained over the accelerator tunnel. A modified calculation was applied to track the multi-charged U beams, because PHITS and FLUKA were fundamentally designed to transport fully ionized ion beams. In this study, the beam loss pattern after the stripping section was observed, and the radiation production by heavy ions was studied. Finally, the performance of the PHITS and FLUKA codes for estimating the radiation production at the stripping section was validated by applying the modified method.

  3. Reinventing the Accelerator for the High Energy Frontier

    ScienceCinema

    Rosenzweig, James [UCLA, Los Angeles, California, United States]

    2017-12-09

    The history of discovery in high-energy physics has been intimately connected with progress in methods of accelerating particles for the past 75 years. This remains true today, as the post-LHC era in particle physics will require significant innovation and investment in a superconducting linear collider. The choice of the linear collider as the next-generation discovery machine, and the selection of superconducting technology has rather suddenly thrown promising competing techniques -- such as very large hadron colliders, muon colliders, and high-field, high frequency linear colliders -- into the background. We discuss the state of such conventional options, and the likelihood of their eventual success. We then follow with a much longer view: a survey of a new, burgeoning frontier in high energy accelerators, where intense lasers, charged particle beams, and plasmas are all combined in a cross-disciplinary effort to reinvent the accelerator from its fundamental principles on up.

  4. Physics at the SPS.

    PubMed

    Gatignon, L

    2018-05-01

    The CERN Super Proton Synchrotron (SPS) has delivered a variety of beams to a vigorous fixed target physics program since 1978. In this paper, we restrict ourselves to the description of a few illustrative examples in the ongoing physics program at the SPS. We will outline the physics aims of the COmmon Muon Proton Apparatus for Structure and Spectroscopy (COMPASS), north area 64 (NA64), north area 62 (NA62), north area 61 (NA61), and advanced proton driven plasma wakefield acceleration experiment (AWAKE). COMPASS studies the structure of the proton and more specifically of its spin. NA64 searches for the dark photon A', which is the messenger for interactions between normal and dark matter. The NA62 experiment aims at a 10% precision measurement of the very rare decay K+ → π+νν. As this decay mode can be calculated very precisely in the Standard Model, it offers a very good opportunity to look for new physics beyond the Standard Model. The NA61/SHINE experiment studies the phase transition to Quark Gluon Plasma, a state in which the quarks and gluons that form the proton and the neutron are de-confined. Finally, AWAKE investigates proton-driven wake field acceleration: a promising technique to accelerate electrons with very high accelerating gradients. The Physics Beyond Colliders study at CERN is paving the way for a significant and diversified continuation of this already rich and compelling physics program that is complementary to the one at the big colliders like the Large Hadron Collider.

  5. Physics at the SPS

    NASA Astrophysics Data System (ADS)

    Gatignon, L.

    2018-05-01

    The CERN Super Proton Synchrotron (SPS) has delivered a variety of beams to a vigorous fixed target physics program since 1978. In this paper, we restrict ourselves to the description of a few illustrative examples in the ongoing physics program at the SPS. We will outline the physics aims of the COmmon Muon Proton Apparatus for Structure and Spectroscopy (COMPASS), north area 64 (NA64), north area 62 (NA62), north area 61 (NA61), and advanced proton driven plasma wakefield acceleration experiment (AWAKE). COMPASS studies the structure of the proton and more specifically of its spin. NA64 searches for the dark photon A', which is the messenger for interactions between normal and dark matter. The NA62 experiment aims at a 10% precision measurement of the very rare decay K+ → π+νν. As this decay mode can be calculated very precisely in the Standard Model, it offers a very good opportunity to look for new physics beyond the Standard Model. The NA61/SHINE experiment studies the phase transition to Quark Gluon Plasma, a state in which the quarks and gluons that form the proton and the neutron are de-confined. Finally, AWAKE investigates proton-driven wake field acceleration: a promising technique to accelerate electrons with very high accelerating gradients. The Physics Beyond Colliders study at CERN is paving the way for a significant and diversified continuation of this already rich and compelling physics program that is complementary to the one at the big colliders like the Large Hadron Collider.

  6. Educating and Training Accelerator Scientists and Technologists for Tomorrow

    NASA Astrophysics Data System (ADS)

    Barletta, William; Chattopadhyay, Swapan; Seryi, Andrei

    2012-01-01

    Accelerator science and technology is inherently an integrative discipline that combines aspects of physics, computational science, electrical and mechanical engineering. As few universities offer full academic programs, the education of accelerator physicists and engineers for the future has primarily relied on a combination of on-the-job training supplemented with intensive courses at regional accelerator schools. This article describes the approaches being used to satisfy the educational curiosity of a growing number of interested physicists and engineers.

  7. Educating and Training Accelerator Scientists and Technologists for Tomorrow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barletta, William A.; Chattopadhyay, Swapan; Seryi, Andrei

    2012-07-01

    Accelerator science and technology is inherently an integrative discipline that combines aspects of physics, computational science, electrical and mechanical engineering. As few universities offer full academic programs, the education of accelerator physicists and engineers for the future has primarily relied on a combination of on-the-job training supplemented with intense courses at regional accelerator schools. This paper describes the approaches being used to satisfy the educational interests of a growing number of interested physicists and engineers.

  8. SAC: Sheffield Advanced Code

    NASA Astrophysics Data System (ADS)

    Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick

    2013-06-01

    The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.

  9. Graduate Student Program in Materials and Engineering Research and Development for Future Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Linda

    The objective of the proposal was to develop graduate student training in materials and engineering research relevant to the development of particle accelerators. Many components used in today's accelerators or storage rings are at the limit of performance. The path forward in many cases requires the development of new materials or fabrication techniques, or a novel engineering approach. Often, accelerator-based laboratories find it difficult to attract top-level engineers or materials experts with the motivation to work on these problems. The three years of funding provided by this grant were used to support development of accelerator components through a multidisciplinary approach that cut across the disciplinary boundaries of accelerator physics, materials science, and surface chemistry. The following results were achieved: (1) significant scientific results on fabrication of novel photocathodes, (2) application of surface science and superconducting materials expertise to accelerator problems through faculty involvement, (3) development of instrumentation for fabrication and characterization of materials for accelerator components, (4) student involvement with problems at the interface of materials science and accelerator physics.

  10. Exploring phase space using smartphone acceleration and rotation sensors simultaneously

    NASA Astrophysics Data System (ADS)

    Monteiro, Martín; Cabeza, Cecilia; Martí, Arturo C.

    2014-07-01

    A paradigmatic physical system, the physical pendulum, is experimentally studied using the acceleration and rotation (gyroscope) sensors available on smartphones and other devices such as iPads and tablets. A smartphone is fixed to the outside of a bicycle wheel whose axis is kept horizontal and fixed. The compound system, wheel plus smartphone, defines a physical pendulum which can rotate, giving full turns in one direction, or oscillate about the equilibrium position (performing either small or large oscillations). Measurements of the radial and tangential acceleration and the angular velocity obtained with the smartphone sensors provide deep insight into the dynamics of the system. In addition, thanks to the simultaneous use of the acceleration and rotation sensors, trajectories in phase space are obtained directly. The agreement between the measurements obtained with the different sensors and by traditional methods is remarkable. Indeed, due to their low cost and increasing availability, smartphone sensors are valuable tools that can be used in most undergraduate laboratories.
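The two phase-space regimes the experiment distinguishes, oscillation (closed curves) versus full rotation (open curves), can be reproduced numerically. A minimal sketch with made-up parameters (the value of mgd/I is illustrative, not from the experiment), integrating θ̈ = -(mgd/I) sin θ with a symplectic Euler step:

```python
import math

# Physical pendulum: theta'' = -(m*g*d/I) * sin(theta)
OMEGA0_SQ = 9.0  # m*g*d/I in 1/s^2 (illustrative value)
DT = 0.001       # time step in s

def trajectory(theta0, omega0, steps=20000):
    """Integrate with the semi-implicit (symplectic) Euler method and
    return the phase-space trajectory as (theta, omega) pairs."""
    theta, omega = theta0, omega0
    traj = []
    for _ in range(steps):
        omega += -OMEGA0_SQ * math.sin(theta) * DT  # kick
        theta += omega * DT                         # drift
        traj.append((theta, omega))
    return traj

# Below the separatrix: oscillation, a closed curve in phase space
osc = trajectory(theta0=0.5, omega0=0.0)
# Above the separatrix: full rotation, theta grows without bound
rot = trajectory(theta0=0.0, omega0=2.5 * math.sqrt(OMEGA0_SQ))

print("oscillation stays bounded:", max(abs(t) for t, _ in osc) < math.pi)
print("rotation winds around:", rot[-1][0] > 2 * math.pi)
```

Plotting omega against theta for the two trajectories gives exactly the closed and open phase-space curves the smartphone sensors trace out.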

  11. Accelerator science and technology in Europe 2008-2017

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2013-10-01

    European Framework Research Projects have recently added considerable substance to the building of the ERA - the European Research Area. Within this effort, accelerator technology plays an essential role. Accelerator technology includes large infrastructure and intelligent, modern instrumentation embracing mechatronics, electronics, photonics and ICT. During the realization of the European research and infrastructure project FP6 CARE 2004-2008 (Coordinated Accelerator Research in Europe), concerning the development of large accelerator infrastructure in Europe, it was decided that a scientific editorial series of peer-reviewed monographs from this research area would be published in close relation with the projects. It was a completely new and quite brave idea to couple a strictly research-oriented publication series to a transient project lasting only four or five years. Until then, nobody had done anything like that. The idea turned out to be a real success. The series, now known and valued in the accelerator world as the (CERN-WUT) Editorial Series on Accelerator Science and Technology, is successfully continued in already the third European project, EuCARD2, and has logistic guarantees, for the moment, until 2017, when it will mature to its first decade. During the realization of the European projects EuCARD (European Coordination for Accelerator R&D, 2009-2013) and TIARA (Test Infrastructure of Accelerator Research Area in Europe), 18 volumes were published in this series. The ambitious plan for the coming years is to publish, hopefully, a few tens of new volumes. Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics and also applications in medicine and industry. The paper presents a digest of the research results in the domain of accelerator science and technology in Europe, published in the monographs of the European Framework Projects (FP) on accelerator technology.
The succession of CARE, Eu

  12. Transversal Clifford gates on folded surface codes

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-12

    Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.

  13. Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    NASA Astrophysics Data System (ADS)

    Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan

    2014-03-01

    We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.

  14. Accelerator Science: Collider vs. Fixed Target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    Particle physics experiments employ high-energy particle accelerators to make their measurements. However, there are many kinds of particle accelerators, with many interesting techniques. One important dichotomy is whether one takes a particle beam and has it hit a stationary target of atoms, or takes two counter-rotating beams of particles and smashes them together head on. In this video, Fermilab’s Dr. Don Lincoln explains the pros and cons of these two powerful methods of exploring the rules of the universe.

  15. Critical analysis of industrial electron accelerators

    NASA Astrophysics Data System (ADS)

    Korenev, S.

    2004-09-01

    A critical analysis of electron linacs for industrial applications (degradation of PTFE, curing of composites, modification of materials, sterilization, and others) is presented in this report. The main physical requirements for industrial electron accelerators center on the ability to vary beam parameters such as kinetic energy and beam power. Questions concerning the regulation of these beam parameters are considered. The absorbed dose required in the irradiated product and the throughput determine the main parameters of the electron accelerator. The ideal type of electron linac for industrial applications is discussed.

  16. Accelerator Science: Collider vs. Fixed Target

    ScienceCinema

    Lincoln, Don

    2018-01-16

    Particle physics experiments employ high-energy particle accelerators to make their measurements. However, there are many kinds of particle accelerators, with many interesting techniques. One important dichotomy is whether one takes a particle beam and has it hit a stationary target of atoms, or takes two counter-rotating beams of particles and smashes them together head on. In this video, Fermilab’s Dr. Don Lincoln explains the pros and cons of these two powerful methods of exploring the rules of the universe.

  17. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  18. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  19. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (S_N) or spherical-harmonics (P_N) solve to accelerate convergence of a high-order S_N source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
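The need for such acceleration comes from the slow convergence of plain source iteration when the scattering ratio c approaches one: the error contracts by only a factor of roughly c per sweep. A toy infinite-medium, one-group model with no spatial dependence (illustrative only, not SCEPTRE's discretization) makes this concrete:

```python
def source_iterations(c, q=1.0, tol=1e-8, max_iter=200000):
    """Count plain source iterations for the scalar fixed point
    phi = c*phi + q, whose exact solution is q/(1 - c).
    Starting from phi = 0, the error shrinks by a factor of c
    each sweep, so the count grows like 1/(1 - c) near c = 1."""
    exact = q / (1.0 - c)
    phi, n = 0.0, 0
    while abs(phi - exact) > tol * exact and n < max_iter:
        phi = c * phi + q
        n += 1
    return n

for c in (0.5, 0.9, 0.99, 0.999):
    print(f"c = {c}: {source_iterations(c)} iterations")
```

The iteration count blows up as c → 1, which is exactly the regime ("large scattering ratios") where TSA preconditioning or a Krylov method such as GMRES pays off.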

  20. Present and future prospects of accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Kutschera, Walter

    1988-05-01

    Accelerator mass spectrometry (AMS) has become a powerful technique for measuring extremely low abundances (10^-10 to 10^-15 relative to stable isotopes) of long-lived radioisotopes with half-lives in the range from 10^2 to 10^8 years. With a few exceptions, tandem accelerators turned out to be the most useful instruments for AMS measurements. Both natural (mostly cosmogenic) and manmade (anthropogenic) radioisotopes are studied with this technique. In some cases very low concentrations of stable isotopes are also measured. Applications of AMS cover a large variety of fields including anthropology, archaeology, oceanography, hydrology, climatology, volcanology, mineral exploration, cosmochemistry, meteoritics, glaciology, sedimentary processes, geochronology, environmental physics, astrophysics, nuclear and particle physics. Present and future prospects of AMS will be discussed as an interplay between the continuous development of new techniques and the investigation of problems in the above mentioned fields. Depending on the specific problem to be investigated, different aspects of an AMS system are of importance. Typical factors to be considered are energy range and type of accelerator, and the possibilities of dedicated versus partial use of new or existing accelerators.

  1. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  2. Explaining the Supernova Data Without Accelerating Expansion

    NASA Astrophysics Data System (ADS)

    Stuckey, W. M.; McDevitt, T. J.; Silberstein, M.

    2012-10-01

    The 2011 Nobel Prize in Physics was awarded "for the discovery of the accelerating expansion of the universe through observations of distant supernovae." However, it is not the case that the type Ia supernova data necessitates accelerating expansion. Since we do not have a successful theory of quantum gravity, we should not assume general relativity (GR) will survive unification intact, especially on cosmological scales where tests are scarce. We provide a simple example of how GR cosmology may be modified to produce a decelerating Einstein-de Sitter cosmology (EdS) that accounts for the Union2 Compilation data as well as the accelerating ΛCDM (EdS plus a cosmological constant).
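
    The observational tension at issue can be quantified with the standard flat-FRW luminosity distance: a matter-only Einstein-de Sitter universe predicts a smaller d_L (brighter supernovae) than ΛCDM at the same redshift. A generic sketch (H0 and Ωm are illustrative, and this is standard GR cosmology, not the paper's modified model):

```python
import numpy as np

C_KM_S, H0 = 299792.458, 70.0        # km/s and km/s/Mpc (illustrative H0)
D_H = C_KM_S / H0                    # Hubble distance in Mpc

def dl_eds(z):
    """Luminosity distance in a flat, matter-only (EdS) universe."""
    return 2.0 * D_H * (1 + z) * (1 - 1.0 / np.sqrt(1 + z))

def dl_lcdm(z, om=0.3, n=100000):
    """Flat LCDM luminosity distance via a trapezoidal comoving integral."""
    zs = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(om * (1 + zs) ** 3 + (1 - om))
    d_c = D_H * np.sum((f[1:] + f[:-1]) / 2) * (zs[1] - zs[0])
    return (1 + z) * d_c

z = 0.5
print(dl_eds(z), dl_lcdm(z))  # LCDM gives the larger d_L: SNe look dimmer
```

    At z = 0.5 the ΛCDM distance is roughly 20% larger, i.e. about 0.4 mag dimmer; the paper's claim is that a suitably modified decelerating EdS cosmology can close this gap without a cosmological constant.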

  3. Nonlinear theory of diffusive acceleration of particles by shock waves

    NASA Astrophysics Data System (ADS)

    Malkov, M. A.; Drury, L. O'C.

    2001-04-01

    Among the various acceleration mechanisms which have been suggested as responsible for the nonthermal particle spectra and associated radiation observed in many astrophysical and space physics environments, diffusive shock acceleration appears to be the most successful. We review the current theoretical understanding of this process, from the basic ideas of how a shock energizes a few reactionless particles to the advanced nonlinear approaches treating the shock and accelerated particles as a symbiotic self-organizing system. By means of direct solution of the nonlinear problem we set the limits of the test-particle approximation and demonstrate the fundamental role of nonlinearity in shocks of astrophysical size and lifetime. We study the bifurcation of this system, proceeding from the hydrodynamic to kinetic description under a realistic condition of Bohm diffusivity. We emphasize the importance of collective plasma phenomena for the global flow structure and acceleration efficiency by considering the injection process, an initial stage of acceleration, and the related aspects of the physics of collisionless shocks. We calculate the injection rate for different shock parameters and different species. This, together with differential acceleration resulting from nonlinear large-scale modification, determines the chemical composition of accelerated particles. The review concentrates on theoretical and analytical aspects, but our strategic goal is to link the fundamental theoretical ideas with the rapidly growing wealth of observational data.
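
    In the test-particle limit that the nonlinear theory extends, diffusive shock acceleration gives a power-law spectrum f(p) ∝ p^(-q) with q = 3r/(r-1), where r is the shock compression ratio - the standard textbook result, sketched below:

```python
def dsa_index(r):
    """Test-particle DSA spectral index: f(p) ~ p**(-q), q = 3r/(r-1)."""
    return 3.0 * r / (r - 1.0)

# Strong shock in a monatomic gas: r = 4 gives q = 4,
# equivalent to N(E) ~ E**-2 for relativistic particles.
print(dsa_index(4.0))
```

    The nonlinear treatment reviewed above modifies exactly this picture: the back-reaction of the accelerated particles on the shock structure produces concave spectra, steeper at low momenta and flatter at high momenta.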

  4. Accelerator mass spectrometry for measurement of long-lived radioisotopes.

    PubMed

    Elmore, D; Phillips, F M

    1987-05-01

    Particle accelerators, such as those built for research in nuclear physics, can also be used together with magnetic and electrostatic mass analyzers to measure rare isotopes at very low abundance ratios. All molecular ions can be eliminated when accelerated to energies of millions of electron volts. Some atomic isobars can be eliminated with the use of negative ions; others can be separated at high energies by measuring their rate of energy loss in a detector. The long-lived radioisotopes 10Be, 14C, 26Al, 36Cl, and 129I can now be measured in small natural samples having isotopic abundances in the range 10^-12 to 10^-15 and as few as 10^5 atoms. In the past few years, research applications of accelerator mass spectrometry have been concentrated in the earth sciences (climatology, cosmochemistry, environmental chemistry, geochronology, glaciology, hydrology, igneous petrogenesis, minerals exploration, sedimentology, and volcanology), in anthropology and archeology (radiocarbon dating), and in physics (searches for exotic particles and measurement of half-lives). In addition, accelerator mass spectrometry may become an important tool for the materials and biological sciences.

  5. Accelerator Mass Spectrometry for Measurement of Long-Lived Radioisotopes

    NASA Astrophysics Data System (ADS)

    Elmore, David; Phillips, Fred M.

    1987-05-01

    Particle accelerators, such as those built for research in nuclear physics, can also be used together with magnetic and electrostatic mass analyzers to measure rare isotopes at very low abundance ratios. All molecular ions can be eliminated when accelerated to energies of millions of electron volts. Some atomic isobars can be eliminated with the use of negative ions; others can be separated at high energies by measuring their rate of energy loss in a detector. The long-lived radioisotopes 10Be, 14C, 26Al, 36Cl, and 129I can now be measured in small natural samples having isotopic abundances in the range 10^-12 to 10^-15 and as few as 10^5 atoms. In the past few years, research applications of accelerator mass spectrometry have been concentrated in the earth sciences (climatology, cosmochemistry, environmental chemistry, geochronology, glaciology, hydrology, igneous petrogenesis, minerals exploration, sedimentology, and volcanology), in anthropology and archeology (radiocarbon dating), and in physics (searches for exotic particles and measurement of half-lives). In addition, accelerator mass spectrometry may become an important tool for the materials and biological sciences.

  6. Impact accelerations

    NASA Technical Reports Server (NTRS)

    Vongierke, H. E.; Brinkley, J. W.

    1975-01-01

    The degree to which impact acceleration is an important factor in space flight environments depends primarily upon the technology of capsule landing deceleration and the weight permissible for the associated hardware: parachutes or deceleration rockets, inflatable air bags, or other impact attenuation systems. The problem most specific to space medicine is the potential change of impact tolerance due to reduced bone mass and muscle strength caused by prolonged weightlessness and physical inactivity. Impact hazards, tolerance limits, and human impact tolerance related to space missions are described.

  7. New estimation method of neutron skyshine for a high-energy particle accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook

    2016-09-01

    Skyshine is the dominant component of the prompt radiation dose off-site. Several experimental studies have been done to estimate the neutron skyshine at a few accelerator facilities. In this work, the neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at distant locations. The neutron dose was calculated using the neutron energy spectra obtained from detectors placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set at 2 km from the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in the calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stapleton, and also with the simple code SHINE3. The results obtained using this method agreed well with those using Rindi's formula.

  8. High Intensity Proton Accelerator Project in Japan (J-PARC).

    PubMed

    Tanaka, Shun-ichi

    2005-01-01

    The High Intensity Proton Accelerator Project, named J-PARC, was started on 1 April 2001 at the Tokai site of JAERI. The accelerator complex of J-PARC consists of three accelerators: a 400 MeV linac, a 3 GeV rapid-cycling synchrotron and a 50 GeV synchrotron; and four major experimental facilities: the Material and Life Science Facility, the Nuclear and Particle Physics Facility, the Nuclear Transmutation Experiment Facility and the Neutrino Facility. An outline of J-PARC is presented along with the current status of construction.

  9. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC), based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large-eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed, each with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established, widely used codes and to match results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
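
    The Jacobian-free Newton-Krylov approach rests on the fact that a Krylov solver needs only Jacobian-vector products, which can be approximated by a finite difference, J(x)v ≈ (F(x+εv) - F(x))/ε, without ever forming J. A minimal sketch on a toy residual (not MUSIC's discretized hydrodynamics equations):

```python
import numpy as np

def residual(x):
    """Toy nonlinear residual F(x); in MUSIC this would come from the
    implicitly discretized hydrodynamics equations."""
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**3 - 5.0])

def jfnk_matvec(F, x, v, eps=1e-7):
    """Matrix-free approximation of J(x) @ v used inside the Krylov solve."""
    return (F(x + eps * v) - F(x)) / eps

x = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
exact_J = np.array([[2 * x[0], 1.0],      # analytic Jacobian of F, for comparison
                    [1.0, 3 * x[1]**2]])
print(jfnk_matvec(residual, x, v), exact_J @ v)  # the two nearly agree
```

    Physics-based preconditioning, as used in MUSIC, then amounts to supplying the Krylov solver with an approximate, cheap-to-invert model of J targeting the stiff physics.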

  10. EuCARD 2010: European coordination of accelerator research and development

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2010-09-01

    Accelerators are basic tools of experimental elementary-particle physics, nuclear physics, and fourth-generation light sources. They are also used in myriad other applications in research, industry and medicine. For example, transmutation techniques for nuclear waste from the nuclear power and atomic industries are being intensely developed. The European Union invests in the development of accelerator infrastructures inside the framework programs to build the European Research Area. The aim is to build new accelerator research infrastructures, develop the existing ones, and generally make the infrastructures more available to competent users. The paper summarizes the first year of activities of the EU FP7 Project Capacities EuCARD - European Coordination of Accelerator R&D. EuCARD is a common venture of 37 European accelerator laboratories, institutes, universities and industrial partners involved in accelerator sciences and technologies. The project, initiated by ESGARD, is an Integrating Activity co-funded by the European Commission under Framework Program 7 - Capacities for a duration of four years, starting April 1st, 2009. Several teams from Poland participate actively in this project. Their contribution concerns: photonic and electronic measurement-control systems, RF-gun co-design, thin-film superconducting technology, superconducting transport infrastructures, and photon and particle beam measurements and control.

  11. Ion Acceleration by Double Layers with Multi-Component Ion Species

    NASA Astrophysics Data System (ADS)

    Good, Timothy; Aguirre, Evan; Scime, Earl; West Virginia University Team

    2017-10-01

    Current-free double layer (CFDL) models have been proposed to explain observations of magnetic-field-aligned ion acceleration in plasmas expanding into divergent magnetic field regions. More recently, experimental studies of the Bohm sheath criterion in multiple-ion-species plasmas revealed an equilibration of Bohm speeds at the sheath-presheath boundary for a grounded plate in a multipole-confined filament discharge. We aim to test this ion velocity effect for CFDL acceleration. We report high-resolution ion velocity distribution function (IVDF) measurements using laser-induced fluorescence downstream of a CFDL in a helicon plasma. Combinations of argon-helium, argon-krypton, and argon-xenon gases are ionized, and argon or xenon IVDFs are measured to determine whether ion acceleration is enhanced (or diminished) by the presence of lighter (or heavier) ions in the mix. We find that the predominant effect is a reduction of ion acceleration, consistent with increased drag arising from increased gas pressure, under all conditions, including constant total gas pressure, equal plasma densities of different ions, and very different plasma densities of different ions. These results suggest that the physics responsible for the acceleration of multiple ion species in simple sheaths is not responsible for the ion acceleration observed in these expanding plasmas. Department of Physics, Gettysburg College.

  12. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al., Medical Physics 41(7), 2014). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, and the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.
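
    The speedup factors implied by the quoted wall times follow directly (note that the GPU figure compares single precision against the double-precision CPU baseline, as in the abstract):

```python
cpu = {"tomo": 339.0, "truebeam": 131.0}  # X5650 CPU, double precision, s
mic = {"tomo": 79.0, "truebeam": 59.0}    # three 5110P MICs, s
gpu_tomo = 45.0                           # K40 GPU, single precision, s

for case in cpu:
    print(case, "MIC speedup:", round(cpu[case] / mic[case], 2))
print("tomo GPU speedup:", round(cpu["tomo"] / gpu_tomo, 2))
```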

  13. Analysis of GEANT4 Physics List Properties in the 12 GeV MOLLER Simulation Framework

    NASA Astrophysics Data System (ADS)

    Haufe, Christopher; Moller Collaboration

    2013-10-01

    To determine the validity of new physics beyond the scope of the electroweak theory, nuclear physicists across the globe have been collaborating on future endeavors that will provide the precision needed to confirm these speculations. One of these is the MOLLER experiment - a low-energy particle experiment that will utilize the 12 GeV upgrade of Jefferson Lab's CEBAF accelerator. The motivation of this experiment is to measure the parity-violating asymmetry of polarized electrons scattered off unpolarized electrons in a liquid hydrogen target. This measurement would allow for a more precise determination of the electron's weak charge and the weak mixing angle. While still in its planning stages, the MOLLER experiment requires a detailed simulation framework in order to determine how the project should be run in the future. The simulation framework for MOLLER, called ``remoll'', is written in GEANT4 code. As a result, the simulation can utilize a number of GEANT4 physics lists that constrain particle interactions based on different particle-physics models. By comparing these lists with one another using the data-analysis application ROOT, the most optimal physics list for the MOLLER simulation can be determined and implemented. This material is based upon work supported by the National Science Foundation under Grant No. 714001.

  14. Additions and improvements to the high energy density physics capabilities in the FLASH code

    NASA Astrophysics Data System (ADS)

    Lamb, D.; Bogale, A.; Feister, S.; Flocke, N.; Graziani, C.; Khiar, B.; Laune, J.; Tzeferacos, P.; Walker, C.; Weide, K.

    2017-10-01

    FLASH is an open-source, finite-volume Eulerian, spatially-adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities exist in FLASH, which make it a powerful open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. We describe several non-ideal MHD capabilities that are being added to FLASH, including the Hall and Nernst effects, implicit resistivity, and a circuit model, which will allow modeling of Z-pinch experiments. We showcase the ability of FLASH to simulate Thomson scattering polarimetry, which measures Faraday rotation due to the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics. Finally, we describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at U. Chicago by DOE NNSA ASC through the Argonne Institute for Computing in Science under FWP 57789; DOE NNSA under NLUF Grant DE-NA0002724; DOE SC OFES Grant DE-SC0016566; and NSF Grant PHY-1619573.

  15. Analysis of secondary particle behavior in multiaperture, multigrid accelerator for the ITER neutral beam injector.

    PubMed

    Mizuno, T; Taniguchi, M; Kashiwagi, M; Umeda, N; Tobari, H; Watanabe, K; Dairaku, M; Sakamoto, K; Inoue, T

    2010-02-01

    Heat load on acceleration grids by secondary particles, such as electrons, neutrals, and positive ions, is a key issue for long-pulse acceleration of negative ion beams. The complicated behaviors of the secondary particles in the multiaperture, multigrid (MAMuG) accelerator have been analyzed using an electrostatic accelerator Monte Carlo code. The analytical result is compared to the experimental one obtained in a long-pulse operation of a MeV accelerator, whose second acceleration grid (A2G) was removed for simplification of the structure. The analytical results show a relatively high heat load on the third acceleration grid (A3G), since stripped electrons were deposited mainly on A3G. This heat load on the A3G can be suppressed by installing the A2G. Thus, the capability of the MAMuG accelerator to suppress the heat load due to secondary particles by means of the intermediate grids is demonstrated.

  16. Symplectic orbit and spin tracking code for all-electric storage rings

    DOE PAGES

    Talman, Richard M.; Talman, John D.

    2015-07-22

    Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," i.e., always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric-bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak-focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach of approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification"). Any residual spurious damping or antidamping is sufficiently small to permit
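
    The quoted magic kinetic energy can be checked from the frozen-spin condition for an all-electric ring, βγ = 1/√G, where G ≈ 1.7928 is the proton's anomalous magnetic moment (the standard frozen-spin result, evaluated with CODATA-style constants):

```python
G_PROTON = 1.792847     # proton magnetic moment anomaly G
MP_C2 = 938.2720813     # proton rest energy, MeV

# Frozen spin in an all-electric ring: beta*gamma = 1/sqrt(G),
# hence gamma**2 = 1 + 1/G.
gamma_magic = (1.0 + 1.0 / G_PROTON) ** 0.5
ke_magic = (gamma_magic - 1.0) * MP_C2
print(round(ke_magic, 3))  # ~232.792 MeV, matching the abstract
```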

  17. MODTRAN6: a major upgrade of the MODTRAN radiative transfer code

    NASA Astrophysics Data System (ADS)

    Berk, Alexander; Conforti, Patrick; Kennett, Rosemary; Perkins, Timothy; Hawes, Frederick; van den Bosch, Jeannette

    2014-06-01

    The MODTRAN6 radiative transfer (RT) code is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. This version of the code incorporates modern software architecture including an application programming interface, enhanced physics features including a line-by-line algorithm, a supplementary physics toolkit, and new documentation. The application programming interface has been developed for ease of integration into user applications. The MODTRAN code has been restructured towards a modular, object-oriented architecture to simplify upgrades as well as facilitate integration with other developers' codes. MODTRAN now includes a line-by-line algorithm for high resolution RT calculations as well as coupling to optical scattering codes for easy implementation of custom aerosols and clouds.

  18. Ultra-high gradient channeling acceleration in nanostructures: Design/progress of proof-of-concept (POC) experiments

    NASA Astrophysics Data System (ADS)

    Shin, Y. M.; Green, A.; Lumpkin, A. H.; Thurman-Keup, R. M.; Shiltsev, V.; Zhang, X.; Farinella, D. M.-A.; Taborek, P.; Tajima, T.; Wheeler, J. A.; Mourou, G.

    2017-03-01

    A short bunch of relativistic particles, or a short-pulse laser, perturbs the density state of conduction electrons in a solid crystal and excites wakefields along the atomic lattices of the crystal. Under a coupling condition between the driver and the plasma, the wakes, if excited, can in principle accelerate channeling particles with TeV/m acceleration gradients [1], since the density of charge carriers (conduction electrons) in solids, n0 = 10^20 - 10^23 cm^-3, is significantly higher than in gaseous plasmas. Nanostructures have some advantages over crystals for channeling applications of high-power beams: the de-channeling rate can be reduced and the beam acceptance increased by the large size of the channels. For beam-driven acceleration, a bunch with a sufficient charge density would need to have a length in the range of the plasma wavelength to properly excite plasma wakefields, and channeled-particle acceleration with the wakefields must occur before the ions in the lattices move beyond the restoring threshold. In the case of excitation by short laser pulses, the dephasing length is appreciably increased with the larger channel, which enables channeled particles to gain sufficient amounts of energy. This paper describes simulation analyses of beam- and laser (X-ray)-driven acceleration in effective nanotube models obtained from the VSim and EPOCH codes. Experimental setups to detect wakefields are also outlined for accelerator facilities at Fermilab and Northern Illinois University (NIU). In the FAST facility, the electron beamline was successfully commissioned at 50 MeV, and it is being upgraded toward higher energies for electron accelerator R&D. The 50 MeV injector beamline of the facility is used for X-ray crystal-channeling radiation with a diamond target. It has been proposed to utilize the same diamond crystal for a channeling acceleration proof-of-concept (POC). Another POC experiment is also designed for the NIU accelerator lab with time
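
    The TeV/m figure follows from the standard cold-plasma wakefield scalings, E0 = m_e c ω_p/e ≈ 96 √(n0[cm^-3]) V/m and λ_p = 2πc/ω_p, evaluated at solid-state carrier densities (a rough order-of-magnitude check, not the paper's simulation result):

```python
import math

def plasma_scales(n0_cm3):
    """Cold-plasma wakefield scalings for electron density n0 in cm^-3."""
    omega_p = 5.64e4 * math.sqrt(n0_cm3)                     # rad/s
    wavelength_um = 2 * math.pi * 2.998e10 / omega_p * 1e4   # cm -> um
    e_wb = 96.0 * math.sqrt(n0_cm3)                          # wave-breaking field, V/m
    return wavelength_um, e_wb

lam_um, e0 = plasma_scales(1e22)  # a mid-range solid carrier density
print(f"{lam_um:.2f} um, {e0:.1e} V/m")  # sub-micron wavelength, ~10 TeV/m
```

    The sub-micron plasma wavelength is also why the abstract stresses the driver bunch length: the drive bunch or laser pulse must be comparable to λ_p to excite the wake resonantly.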

  19. The HL-LHC Accelerator Physics Challenges

    NASA Astrophysics Data System (ADS)

    Fartoukh, S.; Zimmermann, F.

    The conceptual baseline of the HL-LHC project is reviewed, putting into perspective the main beam physics challenges of this new collider in comparison with the existing LHC, and the series of solutions and possible mitigation measures presently envisaged.

  20. Educational activities with a tandem accelerator

    NASA Astrophysics Data System (ADS)

    Casolaro, P.; Campajola, L.; Balzano, E.; D'Ambrosio, E.; Figari, R.; Vardaci, E.; La Rana, G.

    2018-05-01

    Selected experiments in fundamental physics have been proposed for many years at the Tandem Accelerator of the Department of Physics of the University of Napoli ‘Federico II’ as part of a one-semester laboratory course for graduate students. The aim of this paper is to highlight the educational value of the experimental realization of the nuclear reaction 19F(p,α)16O. With the purpose of verifying the mass-energy equivalence principle, different aspects of both classical and modern physics can be investigated, e.g. conservation laws, atomic models, nuclear physics applications to compositional analysis, nuclear cross-sections, Q-values and nuclear spectroscopic analysis.
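
    The mass-energy bookkeeping at the heart of the exercise can be checked directly: with tabulated atomic masses (rounded atomic-mass-evaluation values below), the 19F(p,α)16O Q-value comes out near 8.1 MeV:

```python
U_TO_MEV = 931.494   # atomic mass unit in MeV/c^2

# Atomic masses in u (rounded atomic-mass-evaluation values)
m_f19 = 18.998403    # 19F
m_h1 = 1.007825      # 1H
m_o16 = 15.994915    # 16O
m_he4 = 4.002603     # 4He

q_value = (m_f19 + m_h1 - m_o16 - m_he4) * U_TO_MEV
print(round(q_value, 2))  # ~8.11 MeV released as kinetic energy
```

    Measuring the emitted alpha energies and recovering this Q-value from the mass difference is precisely the verification of mass-energy equivalence the course aims at.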

  1. Plasma Wake-field Acceleration in the Blow-out Regime

    NASA Astrophysics Data System (ADS)

    Barov, Nikolai; Rosenzweig, James

    1999-11-01

    Recent experiments at Argonne National Laboratory, investigating the blow-out regime of the plasma wake-field accelerator, are discussed. These experiments achieved stable underdense (beam denser than the ambient plasma density) beam transport, and measured an average acceleration gradient of 25 MV/m, corresponding to peak wave fields of over 60 MV/m. A comparison of the results to simulation is given, and the physics of the system is discussed. The potential for improvements in performance and achieved acceleration gradient, as well as in accelerated beam quality, is examined within the context of the next generation of experiments at the Fermilab Test Facility. The status of these experiments will be given.

  2. SimTrack: A compact C++ library for particle orbit and spin tracking in accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yun

    2015-06-24

    SimTrack is a compact C++ library for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam-beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam-beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam-beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I will present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
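
    The virtue of symplectic element-by-element tracking is long-term stability: energy errors stay bounded over arbitrarily many turns instead of drifting secularly. The idea can be sketched with a 2nd-order kick-drift-kick integrator on a linear-focusing toy model (far simpler than SimTrack's 4th-order maps and synchro-beam kicks):

```python
def track(x, px, k=1.0, dt=0.1, steps=100000):
    """2nd-order symplectic kick-drift-kick tracking of x'' = -k*x.
    The 'energy' H = (px**2 + k*x**2)/2 stays bounded for all time,
    unlike with non-symplectic schemes such as forward Euler."""
    for _ in range(steps):
        px -= 0.5 * dt * k * x   # half kick
        x += dt * px             # drift
        px -= 0.5 * dt * k * x   # half kick
    return x, px

x0, px0 = 1.0, 0.0
x, px = track(x0, px0)
h0 = 0.5 * (px0**2 + x0**2)      # initial energy
h = 0.5 * (px**2 + x**2)         # energy after 100000 steps
print(abs(h - h0) / h0)          # relative energy error stays small
```

    A non-symplectic scheme on the same problem shows secular energy growth, which over the millions of turns of a dynamic-aperture study would swamp the physics.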

  3. Jerome Lewis Duggan: A Nuclear Physicist and a Well-Known, Six-Decade Accelerator Application Conference (CAARI) Organizer

    NASA Astrophysics Data System (ADS)

    Del McDaniel, Floyd; Doyle, Barney L.

    Jerry Duggan was an experimental MeV-accelerator-based nuclear and atomic physicist who, over the past few decades, played a key role in the important transition of this field from basic to applied physics. His fascination for and application of particle accelerators spanned almost 60 years, and led to important discoveries in the following fields: accelerator-based analysis (accelerator mass spectrometry, ion beam techniques, nuclear-based analysis, nuclear microprobes, neutron techniques); accelerator facilities, stewardship, and technology development; accelerator applications (industrial, medical, security and defense, and teaching with accelerators); applied research with accelerators (advanced synthesis and modification, radiation effects, nanosciences and technology); physics research (atomic and molecular physics, and nuclear physics); and many other areas and applications. Here we describe Jerry’s physics education at the University of North Texas (B. S. and M. S.) and Louisiana State University (Ph.D.). We also discuss his research at UNT, LSU, and Oak Ridge National Laboratory, his involvement with the industrial aspects of accelerators, and his impact on many graduate students, colleagues at UNT and other universities, national laboratories, and industry and acquaintances around the world. Along the way, we found it hard not to also talk about his love of family, sports, fishing, and other recreational activities. While these were significant accomplishments in his life, Jerry will be most remembered for his insight in starting and his industry in maintaining and growing what became one of the most diverse accelerator conferences in the world — the International Conference on the Application of Accelerators in Research and Industry, or what we all know as CAARI. Through this conference, which he ran almost single-handed for decades, Jerry came to know, and became well known by, literally thousands of atomic and nuclear physicists, accelerator

  5. Enhancement of the Accelerating Gradient in Superconducting Microwave Resonators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Checchin, Mattia; Grassellino, Anna; Martinello, Martina

    2017-05-01

    The accelerating gradient of superconducting resonators can be enhanced by engineering the thickness of a dirty layer grown at the cavity's rf surface. In this paper the physics behind the accelerating gradient enhancement by means of the dirty layer is described by numerically solving the Ginzburg-Landau (GL) equations for the layered system. The calculation shows that the presence of the dirty layer stabilizes the Meissner state up to the lower critical field of the bulk, increasing the maximum accelerating gradient.

  6. Standardized Definitions for Code Verification Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    This document contains standardized definitions for several commonly used code verification test problems. The definitions contain sufficient information to set up each test problem in a computational physics code, and are intended to be used in conjunction with exact solutions to these problems generated using ExactPack (www.github.com/lanl/exactpack).

  7. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    NASA Astrophysics Data System (ADS)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    We present the parallel particle-in-cell (PIC) code Apar-T and, more importantly, address the fundamental question of the relations between the PIC model, the Vlasov-Maxwell theory, and real plasmas. First, we present four validation tests: spectra from simulations of thermal plasmas, linear growth rates of the relativistic tearing instability and of the filamentation instability, and the nonlinear filamentation merging phase. For the filamentation instability we show that the effective growth rates measured on the total energy can differ by more than 50% from the linear cold predictions and from the fastest modes of the simulation. We link these discrepancies to the superparticle number per cell and to the level of field fluctuations. Second, we detail a new method for initial loading of Maxwell-Jüttner particle distributions with relativistic bulk velocity and relativistic temperature, and explain why the traditional method with individual particle boosting fails. The formulation of the relativistic Harris equilibrium is generalized to arbitrary temperature and mass ratios. Both are required for the tearing instability setup. Third, we turn to the key point of this paper and scrutinize the question of what description of (weakly coupled) physical plasmas is obtained by PIC models. These models rely on two building blocks: coarse-graining, i.e., grouping of the order of p ~ 10^10 real particles into a single computer superparticle, and field storage on a grid with its subsequent finite superparticle size. We introduce the notion of coarse-graining dependent quantities, i.e., quantities depending on p. They derive from the PIC plasma parameter ΛPIC, which we show to behave as ΛPIC ∝ 1/p. We explore two important implications. One is that PIC collision- and fluctuation-induced thermalization times are expected to scale with the number of superparticles per grid cell, and thus to be a factor p ~ 10^10 smaller than in real plasmas, a fact that we confirm with
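The ΛPIC ∝ 1/p scaling described in the abstract can be made concrete with a toy calculation (the density and Debye length below are illustrative assumptions, not values from the paper):

```python
# Sketch: how the PIC plasma parameter scales with the coarse-graining
# factor p (real particles per superparticle). Illustrative numbers only.

n = 1e20          # electron density, m^-3 (assumed)
lambda_D = 1e-6   # Debye length, m (assumed)

Lambda_real = n * lambda_D**3          # plasma parameter of the real plasma
for p in (1, 1e5, 1e10):
    Lambda_pic = Lambda_real / p       # Lambda_PIC ∝ 1/p, the paper's scaling
    # collision/fluctuation-induced thermalization times scale with Lambda,
    # so they are a factor p shorter in the PIC plasma than in the real one
    print(f"p = {p:.0e}: Lambda_PIC = {Lambda_pic:.2e}")
```

With p ~ 10^10 the PIC plasma parameter is ten orders of magnitude below the real one, which is the origin of the artificially fast thermalization the authors discuss.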

  8. Review of EuCARD project on accelerator infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2013-01-01

    The aim of big infrastructural and research programs (such as the pan-European Framework Programs), and of the individual projects realized within them, is to structure the European Research Area (ERA) so that it is competitive with the leaders of the world. One of these projects is EuCARD (European Coordination of Accelerator Research and Development), whose aim is to structure and modernize accelerator research infrastructure, including accelerators for big free-electron-laser machines. This article presents the development of EuCARD between the annual meeting in Warsaw, April 2012, and the Steering Committee meeting in Uppsala, December 2012. The background to all these efforts is the achievements of the LHC machine and its associated detectors in the race for new physics. The LHC operates in p-p, Pb-p, and Pb-Pb modes (protons and lead ions). The recent discovery by the LHC of a Higgs-like boson has started vivid debates on the further potential of this machine and its future. The periodic EuCARD conferences, workshops and meetings concern the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement-control networks of large geometrical extent, multichannel systems for acquiring large amounts of metrological data, and precision photonic networks for distributing reference time, frequency and phase. The aim of the discussion is not only to summarize the current status but also to make plans and prepare practically for building new infrastructures. Accelerator science and technology is a key enabler of developments in particle physics and photon physics, as well as of applications in medicine and industry. Accelerator technology is intensely developed in all developed nations and regions of the world. The EuCARD project contains a lot of subjects related directly and indirectly to photon

  9. Braiding by Majorana tracking and long-range CNOT gates with color codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2017-11-01

    Color-code quantum computation seamlessly combines Majorana-based hardware with topological error correction. Specifically, as Clifford gates are transversal in two-dimensional color codes, they enable the use of the Majoranas' non-Abelian statistics for gate operations at the code level. Here, we discuss the implementation of color codes in arrays of Majorana nanowires that avoid branched networks such as T junctions, thereby simplifying their realization. We show that, in such implementations, non-Abelian statistics can be exploited without ever performing physical braiding operations. Physical braiding operations are replaced by Majorana tracking, an entirely software-based protocol which appropriately updates the Majoranas involved in the color-code stabilizer measurements. This approach minimizes the required hardware operations for single-qubit Clifford gates. For Clifford completeness, we combine color codes with surface codes, and use color-to-surface-code lattice surgery for long-range multitarget CNOT gates which have a time overhead that grows only logarithmically with the physical distance separating control and target qubits. With the addition of magic state distillation, our architecture describes a fault-tolerant universal quantum computer in systems such as networks of tetrons, hexons, or Majorana box qubits, but can also be applied to nontopological qubit platforms.

  10. The GBS code for tokamak scrape-off layer simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halpern, F.D., E-mail: federico.halpern@epfl.ch; Ricci, P.; Jolliet, S.

    2016-06-15

    We describe a new version of GBS, a 3D global, flux-driven plasma turbulence code to simulate the turbulent dynamics in the tokamak scrape-off layer (SOL), superseding the code presented by Ricci et al. (2012) [14]. The present work is driven by the objective of studying SOL turbulent dynamics in medium size tokamaks and beyond with a high-fidelity physics model. We emphasize an intertwining framework of improved physics models and the computational improvements that allow them. The model extensions include neutral atom physics, finite ion temperature, the addition of a closed field line region, and a non-Boussinesq treatment of the polarization drift. GBS has been completely refactored with the introduction of a 3-D Cartesian communicator and a scalable parallel multigrid solver. We report dramatically enhanced parallel scalability, with the possibility of treating electromagnetic fluctuations very efficiently. The method of manufactured solutions as a verification process has been carried out for this new code version, demonstrating the correct implementation of the physical model.

  11. Geometric integration for particle accelerators

    NASA Astrophysics Data System (ADS)

    Forest, Étienne

    2006-05-01

    This paper is a very personal view of the field of geometric integration in accelerator physics—a field where work of the highest quality is often buried in lost technical notes or simply never published; one has only to think of Simon van der Meer's Nobel-prize work on stochastic cooling, unpublished in any refereed journal. So I reconstructed the relevant history of geometric integration in accelerator physics as best I could by talking to collaborators and using my own understanding of the field. The reader should not be too surprised if this account is somewhere between history, science and perhaps even fiction.

  12. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture significantly benefit fluid dynamics computing.
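The atomic-operation trick for parallelizing list building can be sketched with a CPU-thread analogue (a hedged illustration, not the paper's code: a Python lock stands in for a hardware atomicAdd, and all names are hypothetical):

```python
# Sketch: each worker that flags a cell for refinement atomically reserves
# a slot in the output list, the way a GPU thread would use atomicAdd.
import threading

flags = [i % 3 == 0 for i in range(30)]   # toy refinement flags per cell
out = [None] * sum(flags)                 # compacted list of flagged cells
counter = {"n": 0}
lock = threading.Lock()                   # stands in for a hardware atomic

def collect(cell_ids):
    for c in cell_ids:
        if flags[c]:
            with lock:                    # ~ slot = atomicAdd(&counter, 1)
                slot = counter["n"]
                counter["n"] += 1
            out[slot] = c                 # write to the reserved slot

threads = [threading.Thread(target=collect, args=(range(i, 30, 4),))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()

print(sorted(out))                        # all flagged cells, no duplicates
```

The slot order depends on thread scheduling, but every flagged cell lands in exactly one slot, which is the property the paper's list operations need.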

  13. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector is selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma particle transport and the particles' interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons produced by charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, the discrimination of neutrons and gammas in mixed fields may be performed using MCNPX-ESUT. The main feature of the MCNPX-ESUT code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using MCNPX-ESUT. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and with results obtained from similar computer codes like SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs
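The four-step structure of such a pulse-height simulation can be sketched as a toy pipeline (illustrative only: the light yield, collection efficiency and resolution figures are assumptions for the sketch, not MCNPX-ESUT internals):

```python
# Toy four-step pulse-height pipeline: transport -> scintillation ->
# light collection -> resolution broadening. All numbers are assumed.
import random
random.seed(1)

def transport(n_events):
    # Step 1: particle transport -> energy deposited per event (MeV), toy model
    return [random.uniform(0.1, 2.0) for _ in range(n_events)]

def scintillation(e_dep):
    # Step 2: scintillation photons from charged-particle energy;
    # assume a linear light yield of ~10000 photons/MeV (typical order)
    return [int(10000 * e) for e in e_dep]

def light_collection(photons):
    # Step 3: photon transport to the photocathode; assume 20% collection
    return [0.2 * n for n in photons]

def apply_resolution(signal):
    # Step 4: Gaussian broadening to mimic the measured detector resolution
    return [random.gauss(s, 0.05 * s) for s in signal]

pulse_heights = apply_resolution(light_collection(scintillation(transport(1000))))
print(len(pulse_heights), "events simulated")
```

A histogram of `pulse_heights` would be the toy analogue of the simulated pulse height distribution; in the real code, steps 1-3 are full Monte Carlo transport rather than the one-line models above.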

  14. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  15. Embedded Streaming Deep Neural Networks Accelerator With Applications.

    PubMed

    Dundar, Aysegul; Jin, Jonghoon; Martini, Berin; Culurciello, Eugenio

    2017-07-01

    Deep convolutional neural networks (DCNNs) have become a very powerful tool in visual perception. DCNNs have applications in autonomous robots, security systems, mobile phones, and automobiles, where high throughput of the feedforward evaluation phase and power efficiency are important. Because of this increased usage, many field-programmable gate array (FPGA)-based accelerators have been proposed. In this paper, we present an optimized streaming method for DCNN hardware accelerators on an embedded platform. The streaming method acts as a compiler, transforming a high-level representation of DCNNs into operation codes to execute applications in a hardware accelerator. The proposed method utilizes the maximum computational resources available, based on a novel scheduled routing topology that combines data reuse and data concatenation. It is tested with a hardware accelerator implemented on the Xilinx Kintex-7 XC7K325T FPGA. The system fully explores weight-level and node-level parallelizations of DCNNs and achieves a peak performance of 247 G-ops while consuming less than 4 W of power. We test our system with applications on object classification and object detection in real-world scenarios. Our results indicate high performance efficiency, outperforming all other presented platforms while running these applications.

  16. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
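The sequential-design idea can be sketched on a toy linear model standing in for the real codes (all names and numbers here are hypothetical assumptions): each candidate design condition is scored by a Monte Carlo estimate of the expected posterior entropy of the parameter, and the condition with the lowest expected entropy is the one worth running the high-fidelity code at next.

```python
# Hedged toy of information-based sequential design: pick the design x whose
# simulated "high-fidelity" observation most reduces parameter uncertainty.
import math, random
random.seed(0)

thetas = [i / 50 for i in range(1, 101)]   # parameter grid (prior support)
prior = [1 / len(thetas)] * len(thetas)    # flat prior
sigma = 0.1                                # observation noise (assumed)

def likelihood(y, x, th):
    # toy low-fidelity model: y = theta * x + Gaussian noise
    return math.exp(-0.5 * ((y - th * x) / sigma) ** 2)

def expected_posterior_entropy(x, prior, n_mc=200):
    # preposterior Monte Carlo: average entropy of the updated posterior
    total = 0.0
    for _ in range(n_mc):
        th = random.choices(thetas, prior)[0]
        y = th * x + random.gauss(0, sigma)       # simulated hi-fi datum
        w = [p * likelihood(y, x, t) for p, t in zip(prior, thetas)]
        z = sum(w)
        post = [v / z for v in w]
        total += -sum(p * math.log(p + 1e-300) for p in post)
    return total / n_mc

designs = [0.1, 0.5, 1.0, 2.0]
best = min(designs, key=lambda x: expected_posterior_entropy(x, prior))
print("most informative design condition:", best)
```

In this toy, larger x makes the likelihood steeper in theta, so the largest design condition is the most informative; the paper's framework plays the same game with real high-fidelity physics codes in place of the linear model.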

  17. High spatial resolution measurements in a single stage ram accelerator

    NASA Technical Reports Server (NTRS)

    Hinkey, J. B.; Burnham, E. A.; Bruckner, A. P.

    1992-01-01

    High spatial resolution experimental tube wall pressure measurements of ram accelerator gas dynamic phenomena are presented in this paper. The ram accelerator is a ramjet-in-tube device which operates in a manner similar to that of a conventional ramjet. The projectile resembles the centerbody of a ramjet and travels supersonically through a tube filled with a combustible gaseous mixture, with the tube acting as the outer cowling. Pressure data are recorded as the projectile passes by sensors mounted in the tube wall at various locations along the tube. Utilization of special highly instrumented sections of tube has allowed the recording of gas dynamic phenomena with high resolution. High spatial resolution tube wall pressure data from the three regimes of propulsion studied to date (subdetonative, transdetonative, and superdetonative) in a single stage gas mixture are presented and reveal the three-dimensional character of the flow field induced by the projectile fins and the canting of the projectile body relative to the tube wall. Also presented for comparison to the experimental data are calculations made with an inviscid, three-dimensional CFD code. The knowledge gained from these experiments and simulations is useful in understanding the underlying nature of ram accelerator propulsive regimes, as well as assisting in the validation of three-dimensional CFD codes which model unsteady, chemically reactive flows.

  18. Thermo-magnetic instabilities in Nb3Sn superconducting accelerator magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordini, Bernardo

    2006-09-01

    The advance of High Energy Physics research using circulating accelerators strongly depends on increasing the magnetic bending field which accelerator magnets provide. To achieve high fields, the most powerful present-day accelerator magnets employ NbTi superconducting technology; however, with the start-up of the Large Hadron Collider (LHC) in 2007, NbTi magnets will have reached the maximum field allowed by the intrinsic properties of this superconductor. A further increase of the field strength necessarily requires a change in superconductor material; the best candidate is Nb3Sn. Several laboratories in the US and Europe are currently working on developing Nb3Sn accelerator magnets, and although these magnets have great potential, it is suspected that their performance may be fundamentally limited by conductor thermo-magnetic instabilities: an idea first proposed by the Fermilab High Field Magnet group early in 2003. This thesis presents a study of thermo-magnetic instability in high field Nb3Sn accelerator magnets. In this chapter the following topics are described: the role of superconducting magnets in High Energy Physics; the main characteristics of superconductors for accelerator magnets; typical measurements of current capability in superconducting strands; the properties of Nb3Sn; a description of the manufacturing process of Nb3Sn strands; superconducting cables; a typical layout of superconducting accelerator magnets; the current state of the art of Nb3Sn accelerator magnets; the High Field Magnet program at Fermilab; and the scope of the thesis.

  19. EDITORIAL: Laser and Plasma Accelerators Workshop, Kardamyli, Greece, 2009 Laser and Plasma Accelerators Workshop, Kardamyli, Greece, 2009

    NASA Astrophysics Data System (ADS)

    Bingham, Bob; Muggli, Patric

    2011-01-01

    The Laser and Plasma Accelerators Workshop 2009 was part of a very successful series of international workshops which were conceived at the 1985 Laser Acceleration of Particles Workshop in Malibu, California. Since its inception, the workshop has been held in Asia and in Europe (Kardamyli, Kyoto, Presqu'ile de Giens, Portovenere, Taipei and the Azores). The purpose of the workshops is to bring together the most recent results in laser wakefield acceleration, plasma wakefield acceleration, laser-driven ion acceleration, and radiation generation produced by plasma-based accelerator beams. The 2009 workshop was held on 22-26 June in Kardamyli, Greece, and brought together over 80 participants (http://cfp.ist.utl.pt/lpaw09/). The workshop involved five main themes: • Laser plasma electron acceleration (experiment/theory/simulation) • Computational methods • Plasma wakefield acceleration (experiment/theory/simulation) • Laser-driven ion acceleration • Radiation generation and application. All of these themes are covered in this special issue of Plasma Physics and Controlled Fusion. The topic and application of plasma accelerators is one of the success stories in plasma physics, with laser wakefield acceleration of mono-energetic electrons to GeV energies, of ions to hundreds of MeV, and electron-beam-driven wakefield acceleration to 85 GeV. The accelerating electric field in the wake is of the order of 1 GeV/cm, or an accelerating gradient 1000 times greater than in conventional accelerators, possibly leading to an accelerator 1000 times smaller (and much more affordable) for the same energy. At the same time, the electron beams generated by laser wakefield accelerators have very good emittance with a correspondingly good energy spread of a few percent. They also have the unique feature of being ultra-short, on the femtosecond scale. This makes them attractive for a variety of applications, ranging from material science to ultra-fast time

  20. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
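The purine/pyrimidine reading described in the abstract can be illustrated in a few lines (a sketch of the idea, not code from the article): classifying each base only as purine or pyrimidine turns every codon into a 3-bit pattern, partitioning the 64 codons into 8 coarse classes.

```python
# Read each base as purine (A, G -> 1) or pyrimidine (C, T -> 0),
# so a codon becomes a 3-bit binary pattern.
PURINES = {"A", "G"}

def binary_class(codon):
    return "".join("1" if base in PURINES else "0" for base in codon.upper())

print(binary_class("ATG"))   # start codon -> "101"
print(binary_class("GGC"))   # -> "110"
```

Since each of the three positions carries one bit, the 64 codons collapse into 2^3 = 8 binary classes, which is the coarse grouping the article uses as an intuitive entry point to the triplet code.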

  1. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    NASA Astrophysics Data System (ADS)

    Cowan, B. M.; Kalmykov, S. Y.; Beck, A.; Davoine, X.; Bunkers, K.; Lifschitz, A. F.; Lefebvre, E.; Bruhwiler, D. L.; Shadwick, B. A.; Umstadter, D. P.

    2012-08-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100-terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, 3D particle-in-cell modelling are examined. First, the Cartesian code vorpal (Nieter, C. and Cary, J. R. 2004 VORPAL: a versatile plasma simulation code. J. Comput. Phys. 196, 538) using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code calder-circ (Lifschitz, A. F. et al. 2009 Particle-in-cell modelling of laser-plasma interaction using Fourier decomposition. J. Comput. Phys. 228(5), 1803-1814) uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two modes, reducing the computational load to roughly that of a planar Cartesian simulation while preserving the 3D nature of the interaction. This significant economy of resources allows using fine resolution in the direction of propagation and a small time step, making numerical dispersion vanishingly small, together with a large number of particles per cell, enabling good particle statistics. Quantitative agreement of two simulations indicates that these are free of numerical artefacts. Both approaches thus retrieve the physically correct evolution of the plasma bubble, recovering the intrinsic connection of electron self-injection to the nonlinear optical evolution of the driver.

  2. Which Accelerates Faster--A Falling Ball or a Porsche?

    ERIC Educational Resources Information Center

    Rall, James D.; Abdul-Razzaq, Wathiq

    2012-01-01

    An introductory physics experiment has been developed to address the issues seen in conventional physics lab classes including assumption verification, technological dependencies, and real world motivation for the experiment. The experiment has little technology dependence and compares the acceleration due to gravity by using position versus time…
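The comparison the lab builds toward can be previewed with back-of-envelope arithmetic (the 0-60 mph time below is an assumed, typical sports-car figure, not data from the experiment):

```python
# Compare free-fall acceleration with a car's average 0-60 mph acceleration.
g = 9.81                      # free-fall acceleration, m/s^2
v_60mph = 60 * 0.44704        # 60 mph converted to m/s (~26.8 m/s)
t_0_to_60 = 4.0               # assumed 0-60 mph time, s
a_car = v_60mph / t_0_to_60   # average acceleration, ~6.7 m/s^2

print(f"falling ball: {g} m/s^2, car: {a_car:.1f} m/s^2")
print("ball wins" if g > a_car else "car wins")
```

Under this assumption the falling ball out-accelerates the car, which is the kind of counterintuitive result that motivates the experiment.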

  3. Advanced Accelerator Development Strategy Report: DOE Advanced Accelerator Concepts Research Roadmap Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Over a full two day period, February 2–3, 2016, the Office of High Energy Physics convened a workshop in Gaithersburg, MD to seek community input on development of an Advanced Accelerator Concepts (AAC) research roadmap. The workshop was in response to a recommendation by the HEPAP Accelerator R&D Subpanel [1] [2] to “convene the university and laboratory proponents of advanced acceleration concepts to develop R&D roadmaps with a series of milestones and common down selection criteria towards the goal for constructing a multi-TeV e+e– collider” (the charge to the workshop can be found in Appendix A). During the workshop, proponents of laser-driven plasma wakefield acceleration (LWFA), particle-beam-driven plasma wakefield acceleration (PWFA), and dielectric wakefield acceleration (DWFA), along with a limited number of invited university and laboratory experts, presented and critically discussed individual concept roadmaps. The roadmap workshop was preceded by several preparatory workshops. The first day of the workshop featured presentation of three initial individual roadmaps with ample time for discussion. The individual roadmaps covered a time period extending until roughly 2040, with the end date assumed to be roughly appropriate for initial operation of a multi-TeV e+e– collider. The second day of the workshop comprised talks on synergies between the roadmaps and with global efforts, potential early applications, diagnostics needs, simulation needs, and beam issues and challenges related to a collider. During the last half of the day the roadmaps were revisited but with emphasis on the next five to ten years (as specifically requested in the charge) and on common challenges. The workshop concluded with critical and unanimous endorsement of the individual roadmaps and an extended discussion on the characteristics of the common challenges. (For the agenda and list of participants see Appendix B.)

  4. Study of the transverse beam motion in the DARHT Phase II accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu-Jiuan; Fawley, W M; Houck, T L

    1998-08-20

    The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will accelerate a 4-kA, 3-MeV, 2-µs long electron current pulse to 20 MeV. The energy variation of the beam within the flat-top portion of the current pulse is ±0.5%. The performance of the DARHT Phase II radiographic machine requires the transverse beam motion to be much less than the beam spot size, which is about 1.5 mm diameter on the x-ray converter. In general, the leading causes of transverse beam motion in an accelerator are the beam breakup instability (BBU) and the corkscrew motion. We have modeled the transverse beam motion in the DARHT Phase II accelerator with various magnetic tunes and accelerator cell configurations by using the BREAKUP code. The predicted sensitivity of corkscrew motion and BBU growth to different tuning algorithms will be presented.

  5. Sandia National Laboratories analysis code data base

    NASA Astrophysics Data System (ADS)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  6. Self-accelerating warped braneworlds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carena, Marcela; Lykken, Joseph; Santiago, Jose

    2007-01-15

    Braneworld models with induced gravity have the potential to replace dark energy as the explanation for the current accelerating expansion of the Universe. The original model of Dvali, Gabadadze, and Porrati (DGP) demonstrated the existence of a 'self-accelerating' branch of background solutions, but suffered from the presence of ghosts. We present a new large class of braneworld models which generalize the DGP model. Our models have negative curvature in the bulk, allow a second brane, and have general brane tensions and localized curvature terms. We exhibit three different kinds of ghosts, associated to the graviton zero mode, the radion, and the longitudinal components of massive graviton modes. The latter two species occur in the DGP model, for negative and positive brane tension, respectively. In our models, we find that the two kinds of DGP ghosts are tightly correlated with each other, but are not always linked to the feature of self-acceleration. Our models are a promising laboratory for understanding the origins and physical meaning of braneworld ghosts, and perhaps for eliminating them altogether.

  7. Self-accelerating Warped Braneworlds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carena, Marcela; Lykken, Joseph; /Fermilab

    2006-11-01

    Braneworld models with induced gravity have the potential to replace dark energy as the explanation for the current accelerating expansion of the Universe. The original model of Dvali, Gabadadze and Porrati (DGP) demonstrated the existence of a 'self-accelerating' branch of background solutions, but suffered from the presence of ghosts. We present a new large class of braneworld models which generalize the DGP model. Our models have negative curvature in the bulk, allow a second brane, and have general brane tensions and localized curvature terms. We exhibit three different kinds of ghosts, associated to the graviton zero mode, the radion, and the longitudinal components of massive graviton modes. The latter two species occur in the DGP model, for negative and positive brane tension, respectively. In our models, we find that the two kinds of DGP ghosts are tightly correlated with each other, but are not always linked to the feature of self-acceleration. Our models are a promising laboratory for understanding the origins and physical meaning of braneworld ghosts, and perhaps for eliminating them altogether.

  8. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
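    The per-photon independence that makes such simulations GPU-friendly can be illustrated with a deliberately minimal sketch (not the PENELOPE physics used in the paper; the function name and parameters are invented for illustration): every photon's free path through a uniform slab is sampled in one vectorized step, the array-wide operation standing in for a one-thread-per-photon CUDA kernel.

    ```python
    import numpy as np

    def slab_transmission(mu, thickness, n_photons, seed=0):
        """Monte Carlo estimate of uncollided photon transmission through a slab.

        Each photon's free path is drawn from the exponential distribution with
        mean 1/mu; a photon whose path exceeds the slab thickness crosses it
        without interacting. All photons are handled as one array, mimicking
        the one-thread-per-photon layout of a GPU kernel.
        """
        rng = np.random.default_rng(seed)
        path = rng.exponential(1.0 / mu, n_photons)
        return np.mean(path > thickness)

    mu, t = 0.2, 5.0  # attenuation coefficient (1/cm) and slab thickness (cm)
    print(slab_transmission(mu, t, 1_000_000))  # close to the analytic exp(-mu*t) ~ 0.368
    ```

    Because each photon history is independent, the same loop-free structure maps directly onto thousands of GPU threads, which is the essence of the paper's speedup.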

  9. Plasma Accelerators Race to 10 GeV and Beyond

    NASA Astrophysics Data System (ADS)

    Katsouleas, Tom

    2005-10-01

    This paper reviews the concepts, recent progress, and current challenges for realizing the tremendous electric fields in relativistic plasma waves for applications ranging from tabletop particle accelerators to high-energy physics. Experiments in the 1990s on laser-driven plasma wakefield accelerators at several laboratories around the world demonstrated the potential for plasma wakefields to accelerate intense bunches of self-trapped particles at rates as high as 100 GeV/m in mm-scale gas jets. These early experiments offered impressive gradients but large energy spread (100%) and short interaction lengths. Major breakthroughs have recently occurred on both fronts. Three groups (LBL-US, LOA-France and RAL-UK) have now entered a new regime of laser wakefield acceleration resulting in 100 MeV mono-energetic beams with up to nanoCoulombs of charge and very small angular spread. Simulations suggest that current lasers are just entering this new regime, and the scaling to higher energies appears attractive. In parallel with the progress in laser-driven wakefields, particle-beam driven wakefield accelerators are making large strides. A series of experiments using the 30 GeV beam of the Stanford Linear Accelerator Center (SLAC) has demonstrated high-gradient acceleration of electrons and positrons in meter-scale plasmas. The UCLA/USC/SLAC collaboration has accelerated electrons beyond 1 GeV and is aiming at 10 GeV in 30 cm as the next step toward a ``plasma afterburner,'' a concept for doubling the energy of a high-energy collider in a few tens of meters of plasma. In addition to wakefield acceleration, these and other experiments have demonstrated the rich physics bounty to be reaped from relativistic beam-plasma interactions. This includes plasma lenses capable of focusing particle beams to the highest density ever produced, collective radiation mechanisms capable of generating high-brightness x-ray beams, collective refraction of particles at a plasma interface, and

  10. Distribution of the background gas in the MITICA accelerator

    NASA Astrophysics Data System (ADS)

    Sartori, E.; Dal Bello, S.; Serianni, G.; Sonato, P.

    2013-02-01

    MITICA is the ITER neutral beam test facility to be built in Padova for the generation of a 40 A D⁻ ion beam with a 16×5×16 array of 1280 beamlets accelerated to 1 MV. The background gas pressure distribution and the particle flows inside the MITICA accelerator are critical aspects for stripping losses, generation of secondary particles, and beam non-uniformities. To keep the stripping losses in the extraction and acceleration stages reasonably low, the source pressure should be 0.3 Pa or less. The gas flow in the MITICA accelerator is being studied using a 3D finite element code named Avocado. The gas-wall interaction model is based on the cosine law, and the whole vacuum system geometry is represented by a view factor matrix built from the surface discretization and the gas property definitions. Pressure distribution and mutual fluxes are then solved linearly. In this paper the result of a numerical simulation is presented, showing the steady-state pressure distribution inside the accelerator when gas enters the system at room temperature. The accelerator model is limited to a horizontal slice 400 mm high (1/4 of the accelerator height). The pressure profile at the solid walls and along the beamlet axis is obtained, allowing the evaluation and discussion of the background gas distribution and non-uniformity. The particle fluxes at the inlet and outlet boundaries (namely the grounded grid apertures and the lateral conductances, respectively) will be discussed.
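    The view-factor formulation described above reduces molecular flow to a linear system, which can be sketched in a few lines. The following toy model is not the Avocado code itself (the function name, the sticking-coefficient pump, and the two-plate geometry are invented for illustration): each wall patch re-emits diffusely, per the cosine law, the fraction of arriving gas that is not pumped, plus any injected flux.

    ```python
    import numpy as np

    def solve_wall_fluxes(F, q, sticking):
        """Molecular-flow wall fluxes from a view-factor matrix.

        F[j, i] is the fraction of gas leaving patch j that lands on patch i
        (rows sum to 1 in a closed enclosure). Each patch re-emits the fraction
        (1 - sticking) of what arrives, plus its own injected flux q, so the
        balance phi = q + diag(1 - sticking) @ F.T @ phi is linear in phi.
        """
        n = len(q)
        A = np.eye(n) - np.diag(1.0 - sticking) @ F.T
        return np.linalg.solve(A, q)

    # Two facing plates: plate 0 injects unit flux, plate 1 pumps with s = 0.1.
    F = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    phi = solve_wall_fluxes(F, q=np.array([1.0, 0.0]), sticking=np.array([0.0, 0.1]))
    print(phi)  # analytic result: phi[0] = q/s = 10, phi[1] = 9
    ```

    The same structure, with a much larger discretized surface and physically computed view factors, underlies the linear pressure solution mentioned in the abstract.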

  11. Acceleration modules in linear induction accelerators

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Heng; Deng, Jian-Jun

    2014-05-01

    The linear induction accelerator (LIA) is a unique type of accelerator, capable of accelerating kilo-ampere charged particle currents to tens of MeV. The recent development of LIAs operating in MHz burst mode, and their successful application in a synchrotron, have broadened the scope of LIA applications. Although the transformer model is widely used to explain the acceleration mechanism of LIAs, for many modern LIAs it is not appropriate to regard the induction electric field as the field that accelerates the charged particles. We have examined how the functions of the magnetic cores changed as LIA acceleration modules evolved, distinguished transformer-type from transmission-line-type acceleration modules, and re-considered several related issues on the basis of the transmission-line-type module. This clarified understanding should help in the further development and design of LIA acceleration modules.

  12. Progress in The Semantic Analysis of Scientific Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  13. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes capable of modeling complex coupled physical and chemical processes have been developed for predicting the fate of CO2 in reservoirs, as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding when solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work: two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. To make them a useful tool, however, several practical obstacles must be overcome so that general-purpose reservoir simulators can utilize large numbers of processors effectively. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing the implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent

  14. Effects of energy chirp on bunch length measurement in linear accelerator beams

    NASA Astrophysics Data System (ADS)

    Sabato, L.; Arpaia, P.; Giribono, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Vaccarezza, C.; Variola, A.

    2017-08-01

    The effects of assumptions about bunch properties on the accuracy of the bunch length measurement method based on radio frequency deflectors (RFDs) in electron linear accelerators (LINACs) are investigated. In particular, when the electron bunch at the RFD has a non-negligible energy chirp (i.e. a correlation between the longitudinal positions and energies of the particles), the measurement is affected by a deterministic intrinsic error, which is directly related to the RFD phase offset. A case study of this effect in the electron LINAC of the gamma beam source at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) is reported. The relative error is estimated by using the electron generation and tracking (ELEGANT) code to define the reference measurements of the bunch length. The relative error is shown to increase linearly with the RFD phase offset; in particular, for an offset of 7°, corresponding to a vertical centroid offset at the screen of about 1 mm, the relative error is 4.5%.
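    The growth of the intrinsic error with RFD phase offset can be reproduced with a toy model (a hypothetical sketch, not the ELI-NP ELEGANT setup; all names and numbers here are invented for illustration): a chirped Gaussian bunch receives a vertical offset proportional to sin(kz + φ), modulated by the particle energy, and the bunch length is then reconstructed with the usual chirp-free calibration.

    ```python
    import numpy as np

    def rfd_bunch_length_error(phi_deg, chirp=0.01, kappa=1.0, n=200_001):
        """Relative bunch-length error of an RFD measurement with energy chirp.

        Toy model: particles at longitudinal position z (Gaussian, sigma_z = 1
        in arbitrary units) get a vertical offset
            y = kappa * sin(k z + phi) / (1 + chirp * z),
        and the bunch length is reconstructed with the chirp-free calibration
            sigma_z_meas = sigma_y / (kappa * k * cos(phi)).
        Returns |sigma_z_meas - sigma_z| / sigma_z.
        """
        k = 0.05                                   # deflector wavenumber (arb. units)
        z = np.linspace(-5.0, 5.0, n)              # quadrature grid over +/- 5 sigma
        w = np.exp(-0.5 * z**2)
        w /= w.sum()                               # Gaussian weights, sigma_z = 1
        phi = np.radians(phi_deg)
        y = kappa * np.sin(k * z + phi) / (1.0 + chirp * z)
        sigma_y = np.sqrt(np.sum(w * y**2) - np.sum(w * y) ** 2)
        return abs(sigma_y / (kappa * k * np.cos(phi)) - 1.0)

    for phi in (0.0, 3.5, 7.0):
        print(phi, rfd_bunch_length_error(phi))  # error grows with the phase offset
    ```

    In this model the leading chirp-induced term in the measured variance scales as tan(φ), so the error grows nearly linearly with small phase offsets, consistent with the trend reported in the paper.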

  15. Benchmarking the SPHINX and CTH shock physics codes for three problems in ballistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, L.T.; Hertel, E.; Schwalbe, L.

    1998-02-01

    The CTH Eulerian hydrocode and the SPHINX smooth particle hydrodynamics (SPH) code were used to model a shock tube, two long-rod penetrations into semi-infinite steel targets, and a long-rod penetration into a spaced plate array. The results were then compared to experimental data. Both SPHINX and CTH modeled the one-dimensional shock tube problem well, and both did a reasonable job of modeling the outcome of the axisymmetric rod impact problem, although neither code correctly reproduced the depth of penetration in both experiments. In the 3-D problem, both codes reasonably replicated the penetration of the rod through the first plate; after this, however, the predictions of both codes began to diverge from the results seen in the experiment. In terms of computer resources, the run times are problem dependent and are discussed in the text.

  16. Figuring the Acceleration of the Simple Pendulum

    NASA Astrophysics Data System (ADS)

    Lieberherr, Martin

    2011-12-01

    The centripetal acceleration has been known since Huygens' (1659) and Newton's (1684) time.1,2 The physics to calculate the acceleration of a simple pendulum has been around for more than 300 years, and a fairly complete treatise has been given by C. Schwarz in this journal.3 But sentences like "the acceleration is always directed towards the equilibrium position" beside the picture of a swing on a circular arc can still be found in textbooks, as e.g. in Ref. 4. Vectors have been invented by Grassmann (1844)5 and are conveniently used to describe the acceleration in curved orbits, but acceleration is more often treated as a scalar with or without sign, as the words acceleration/deceleration suggest. The component tangential to the orbit is enough to deduce the period of the simple pendulum, but it is not enough to discuss the forces on the pendulum, as has been pointed out by Santos-Benito and A. Gras-Marti.6 A suitable way to address this problem is a nice figure with a catch for classroom discussions or homework. When I plotted the acceleration vectors of the simple pendulum in their proper positions, pictures as in Fig. 1 appeared on the screen. The endpoints of the acceleration vectors, if properly scaled, seemed to lie on a curve with a familiar shape: a cardioid. Is this true or just an illusion?
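    The acceleration components discussed above follow directly from energy conservation and can be computed for plotting such a figure. A minimal sketch (the function name and the release-from-rest assumption are mine, not from the article):

    ```python
    import numpy as np

    def pendulum_acceleration(theta, theta0, g=9.81, L=1.0):
        """Tangential and centripetal acceleration of a simple pendulum.

        The pendulum is released from rest at angle theta0. Energy conservation
        gives v^2 = 2 g L (cos(theta) - cos(theta0)), so the centripetal
        component v^2 / L = 2 g (cos(theta) - cos(theta0)) needs no time
        integration; the length L cancels out of both components.
        """
        a_tangential = -g * np.sin(theta)                           # along the arc
        a_centripetal = 2.0 * g * (np.cos(theta) - np.cos(theta0))  # toward the pivot
        return a_tangential, a_centripetal

    theta0 = np.radians(60.0)
    # At the turning point the acceleration is purely tangential; at the lowest
    # point it is purely centripetal, with magnitude 2 g (1 - cos(theta0)).
    print(pendulum_acceleration(theta0, theta0))
    print(pendulum_acceleration(0.0, theta0))
    ```

    Evaluating both components along the swing and plotting the vector endpoints at their proper positions reproduces the kind of figure the article asks about.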

  17. Physics through the 1990s: Elementary-particle physics

    NASA Astrophysics Data System (ADS)

    The volume begins with a non-mathematical discussion of the motivation behind, and basic ideas of, elementary-particle physics theory and experiment. The progress over the past two decades with the quark model and unification of the electromagnetic and weak interactions is reviewed. Existing theoretical problems in the field, such as the origin of mass and the unification of the fundamental forces, are detailed, along with experimental programs to test the new theories. Accelerators, instrumentation, and detectors are described for both current and future facilities. Interactions with other areas of both theoretical and applied physics are presented. The sociology of the field is examined regarding the education of graduate students, the organization necessary in large-scale experiments, and the decision-making process involved in high-cost experiments. Finally, conclusions and recommendations for maintaining US excellence in theory and experiment are given. Appendices list both current and planned accelerators, and present statistical data on the US elementary-particle physics program. A glossary is included.

  18. Physics through the 1990s: Elementary-particle physics

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The volume begins with a non-mathematical discussion of the motivation behind, and basic ideas of, elementary-particle physics theory and experiment. The progress over the past two decades with the quark model and unification of the electromagnetic and weak interactions is reviewed. Existing theoretical problems in the field, such as the origin of mass and the unification of the fundamental forces, are detailed, along with experimental programs to test the new theories. Accelerators, instrumentation, and detectors are described for both current and future facilities. Interactions with other areas of both theoretical and applied physics are presented. The sociology of the field is examined regarding the education of graduate students, the organization necessary in large-scale experiments, and the decision-making process involved in high-cost experiments. Finally, conclusions and recommendations for maintaining US excellence in theory and experiment are given. Appendices list both current and planned accelerators, and present statistical data on the US elementary-particle physics program. A glossary is included.

  19. Distribution uniformity of laser-accelerated proton beams

    NASA Astrophysics Data System (ADS)

    Zhu, Jun-Gao; Zhu, Kun; Tao, Li; Xu, Xiao-Han; Lin, Chen; Ma, Wen-Jun; Lu, Hai-Yang; Zhao, Yan-Ying; Lu, Yuan-Rong; Chen, Jia-Er; Yan, Xue-Qing

    2017-09-01

    Compared with conventional accelerators, laser plasma accelerators can generate high energy ions at a greatly reduced scale, thanks to their TV/m acceleration gradients. A compact laser plasma accelerator (CLAPA) has been built at the Institute of Heavy Ion Physics at Peking University. It will be used for applied research such as biological irradiation and astrophysics simulations. A beamline system with multiple quadrupoles and an analyzing magnet for laser-accelerated ions is proposed here. Since laser-accelerated ion beams have broad energy spectra and large angular divergence, the parameters of the beamline system (beam waist position in the Y direction, beamline layout, drift distance, magnet angles, etc.) are carefully designed and optimised to obtain a radially symmetric proton distribution at the irradiation platform. Requirements of energy selection and differences in focusing or defocusing in application systems greatly influence the evolution of proton distributions. With optimal parameters, radially symmetric proton distributions can be achieved, and protons with different energy spreads within ±5% have similar transverse areas at the experiment target. Supported by the National Natural Science Foundation of China (11575011, 61631001) and the National Grand Instrument Project (2012YQ030142).

  20. Prototyping high-gradient mm-wave accelerating structures

    DOE PAGES

    Nanni, Emilio A.; Dolgashev, Valery A.; Haase, Andrew; ...

    2017-01-01

    We present single-cell accelerating structures designed for high-gradient testing at 110 GHz. The purpose of this work is to study the basic physics of ultrahigh vacuum RF breakdown in high-gradient RF accelerators. The accelerating structures are π-mode standing-wave cavities fed with a TM01 circular waveguide. The structures are fabricated by precision milling out of two metal blocks, and the blocks are joined with diffusion bonding and brazing. The impact of fabrication and joining techniques on the cell geometry and RF performance will be discussed. First prototypes had a measured Q0 of 2800, approaching the theoretical design value of 3300. The geometry of these accelerating structures is as close as practical to that of the single-cell standing-wave X-band accelerating structures, more than 40 of which were tested at SLAC. This wealth of X-band data will serve as a baseline for the 110 GHz tests. Furthermore, the structures will be powered with short pulses from a MW gyrotron oscillator. An RF power of 1 MW may allow an accelerating gradient of 400 MeV/m to be reached.