Unaligned instruction relocation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
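The two-pass scheme described in the abstract can be caricatured in a few lines. This is a toy sketch under loud assumptions: real linkers patch relocation records in object-file sections, not Python lists, and the slot layout and symbol names here are invented for illustration.

```python
# Toy sketch of two-pass relocation (hypothetical layout and names).
# Pass 1 patches references *inside* accelerator code that point out;
# pass 2 patches references *outside* accelerator code that point in.

def relocate(code, relocations, symbols):
    """Patch each relocation site with the final address of its symbol."""
    patched = list(code)
    for site, symbol in relocations:
        patched[site] = symbols[symbol]
    return patched

# Hypothetical image: host code occupies slots 0-3, accelerator code 4-7.
symbols = {"host_func": 2, "accel_kernel": 5}
image = [0] * 8

# Pass 1: a site inside accelerator code (slot 6) refers to a host symbol.
image = relocate(image, [(6, "host_func")], symbols)

# Pass 2: a site inside host code (slot 1) refers to an accelerator symbol.
image = relocate(image, [(1, "accel_kernel")], symbols)

print(image)  # [0, 5, 0, 0, 0, 0, 2, 0]
```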
Summary Report of Working Group 2: Computation
NASA Astrophysics Data System (ADS)
Stoltz, P. H.; Tsung, R. S.
2009-01-01
The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high-gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight-times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.
Griffiths, Malcolm; Walters, L.; Greenwood, L. R.; ...
2017-09-21
The original article addresses the opportunities and complexities of using materials test reactors with high neutron fluxes to perform accelerated studies of material aging in power reactors operating at lower neutron fluxes and with different neutron flux spectra. Radiation damage and gas production in different reactors have been compared using the SPECTER code. This code provides a common standard from which to compare neutron damage data generated by different research groups using a variety of reactors. This Corrigendum identifies a few typographical errors. Tables 2 and 3 are included in revised form.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
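The first technique listed, replacing linear searches with binary versions, is easy to illustrate on the kind of sorted-grid lookup a transport code performs constantly. The grid values below are invented; the point is only that the two lookups agree while the binary one is O(log n).

```python
import bisect

# Energy-grid lookup: find the interval containing a sampled energy.
# (Illustrative data; not taken from ITS.)
grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0]  # MeV, sorted ascending

def linear_lookup(grid, e):
    # O(n) scan of the kind the original code used.
    for i in range(len(grid) - 1):
        if grid[i] <= e < grid[i + 1]:
            return i
    return len(grid) - 2  # clamp to last interval

def binary_lookup(grid, e):
    # O(log n) bisection; identical result on the interior of the grid.
    i = bisect.bisect_right(grid, e) - 1
    return min(max(i, 0), len(grid) - 2)

assert all(linear_lookup(grid, e) == binary_lookup(grid, e)
           for e in [0.05, 0.3, 0.9, 2.0, 7.5])
```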
The Particle Accelerator Simulation Code PyORBIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M
2015-01-01
The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.
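The two-level structure described above can be sketched in miniature. In this hedged toy, a plain Python function stands in for the compiled C++ kernel, and the class and method names are hypothetical, not PyORBIT's actual API.

```python
# Minimal sketch of the two-level pattern: Python controls the flow;
# the heavy kernels live in a compiled extension. Here a plain function
# stands in for the C++ layer.

def track_turn(coords, kick):          # stand-in for a compiled kernel
    return [x + kick for x in coords]

class Lattice:
    def __init__(self, kicks):
        self.kicks = kicks

    def track(self, coords, turns):
        for _ in range(turns):
            for k in self.kicks:                 # Python-level control flow
                coords = track_turn(coords, k)   # compiled-level work
        return coords

bunch = [0.0, 1.0, 2.0]
print(Lattice([0.5, -0.5]).track(bunch, turns=3))  # net kick cancels: [0.0, 1.0, 2.0]
```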
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator Driven System (ADS) has been under development for analyzing dynamic behaviors of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating effects related to the accelerator such as changes of beam energy, beam diameter, void generation, and target level. This analysis method using the database may introduce some errors into dynamics calculations, since the neutron source data derived from the database have some errors from the fitting or interpolating procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.
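The concern about fitting and interpolation error in a precomputed source database can be made concrete with a toy example: tabulate a smooth curve coarsely, interpolate linearly, and measure the worst-case deviation. The function and grid spacing are invented, not PHITS data.

```python
import math

# Toy illustration of database-interpolation error: tabulate a smooth
# yield curve coarsely, interpolate linearly, compare with the true curve.

xs = [0.0, 0.5, 1.0, 1.5, 2.0]            # coarse "database" grid
ys = [math.exp(-x) for x in xs]

def interp(x):
    i = min(int(x / 0.5), len(xs) - 2)    # locate the interval
    t = (x - xs[i]) / 0.5
    return (1 - t) * ys[i] + t * ys[i + 1]

# Scan a fine grid to find the worst deviation from the true curve.
worst = max(abs(interp(x / 100) - math.exp(-x / 100)) for x in range(0, 201))
print(f"max interpolation error: {worst:.4f}")
```

For linear interpolation of a smooth function, the worst error scales with the square of the table spacing, which is why a coarse database can noticeably bias a dynamics calculation.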
1984-08-01
Collective Particle Accelerator via Numerical Modeling with the MAGIC Code. Robert J. Barker. August 1984. Final Report for Period 1 April 1984 - 30 September 1984. Performing Org. Report Number: MRC/WDC-R
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott
2012-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional, general-geometry GEM code.
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
Development of Maximum Considered Earthquake Ground Motion Maps
Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.
2000-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.
3D unstructured-mesh radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard $S_n$ (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: $S_n$ (discrete-ordinates), $P_n$ (spherical harmonics), and $SP_n$ (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard $S_n$ discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
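The within-group source iteration that these codes accelerate can be illustrated in its simplest setting, an infinite homogeneous medium, where the transport sweep collapses to a scalar fixed-point update. The cross sections below are made up, and the diffusion-synthetic acceleration itself is omitted for brevity.

```python
# One-group source iteration in an infinite homogeneous medium, where
# transport reduces to the scalar fixed point
#     phi <- (sigma_s * phi + q) / sigma_t.
# Convergence rate is the scattering ratio sigma_s / sigma_t, which is
# why highly scattering problems need acceleration schemes such as DSA.

sigma_t, sigma_s, q = 1.0, 0.8, 1.0   # total, scattering, fixed source
phi, tol = 0.0, 1e-10
for iteration in range(1000):
    phi_new = (sigma_s * phi + q) / sigma_t
    if abs(phi_new - phi) < tol:
        break
    phi = phi_new

print(round(phi_new, 6))  # exact answer is q / (sigma_t - sigma_s) = 5.0
```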
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of that time.
NASA Astrophysics Data System (ADS)
Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van
2013-12-01
The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
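The motion-estimation step that the article accelerates on the GPU rests on block matching: for each block, search a window in the reference frame for the displacement minimizing a cost such as the sum of absolute differences (SAD). A pure-Python full-search sketch follows (toy data, no relation to the H.264 reference encoder):

```python
# Full-search block matching, the kernel variable-block-size motion
# estimation builds on. Real encoders use SIMD/GPU; this is a toy.

def sad(a, b):
    # Sum of absolute differences between two equal-sized 2-D blocks.
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def best_match(ref, block, block_size, search_range):
    """Return the (dy, dx) motion vector minimizing SAD over the window."""
    best = None
    for dy in range(search_range + 1):
        for dx in range(search_range + 1):
            cand = [row[dx:dx + block_size]
                    for row in ref[dy:dy + block_size]]
            cost = sad(cand, block)
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]

ref = [[r * 8 + c for c in range(8)] for r in range(8)]  # toy 8x8 frame
block = [row[3:7] for row in ref[2:6]]       # 4x4 patch taken at (2, 3)
print(best_match(ref, block, 4, 4))  # → (2, 3)
```

Each candidate displacement is independent of the others, which is exactly why this search maps so well onto a GPU.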
NASA Astrophysics Data System (ADS)
Alvanos, Michail; Christoudias, Theodoros
2017-10-01
This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing the output of the accelerated kernel to that of the CPU-only code of the application; the median relative difference is found to be less than 0.000000001 %. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
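The solvers being ported are Rosenbrock-type (linearly implicit) methods, which KPP favors because atmospheric kinetics is stiff. A one-stage scalar version on a toy stiff decay shows the motivation; the problem and step size are invented and bear no relation to EMAC chemistry.

```python
# One-stage Rosenbrock (linearly implicit Euler) versus explicit Euler on
# the stiff decay y' = -lam*y. The implicit method stays stable at step
# sizes where the explicit one blows up, which is why Rosenbrock methods
# suit stiff chemical kinetics. (Toy scalar problem; the CUDA port
# integrates many such cells in parallel.)

lam, h, y0, steps = 1.0e3, 0.1, 1.0, 50   # stiff: lam*h = 100

def explicit_euler(y):
    return y + h * (-lam * y)             # amplification 1 - lam*h = -99

def rosenbrock1(y):
    jac = -lam                            # Jacobian of f(y) = -lam*y
    k = h * (-lam * y) / (1 - h * jac)    # solve (1 - h*J) k = h f(y)
    return y + k                          # amplification 1/(1 + lam*h)

ye, yr = y0, y0
for _ in range(steps):
    ye, yr = explicit_euler(ye), rosenbrock1(yr)

print(abs(ye) > 1e50, abs(yr) < 1e-3)  # explicit diverges, Rosenbrock decays
```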
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Warren
The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma-based accelerator codes). We also used these ideas to develop a GPU-enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, which is an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented into OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma-based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge-conserving current deposit for this algorithm. Very recently, we made progress in combining the speed-up from the quasi-3D algorithm with that from the Lorentz boosted frame.
SciDAC funds also contributed to the improvement and speed-up of the quasi-static PIC code QuickPIC. We have also used our suite of PIC codes to make scientific discoveries. Highlights include supporting FACET experiments, which achieved the milestones of showing high beam loading and energy transfer efficiency from a drive electron beam to a witness electron beam, and the discovery of a self-loading regime for high-gradient acceleration of a positron beam. Both of these experimental milestones were published in Nature together with supporting QuickPIC simulation results. Simulation results from QuickPIC were used on the cover of Nature in one case. We are also making progress on using highly resolved QuickPIC simulations to show that ion motion may not lead to catastrophic emittance growth for tightly focused electron bunches loaded into nonlinear wakefields. This could mean that fully self-consistent beam loading scenarios are possible. This work remains in progress. OSIRIS simulations were used to discover how 200 MeV electron rings are formed in LWFA experiments, how to generate electrons in a series of bunches on the nanometer scale, and how to transport electron beams from (into) plasma sections into (from) conventional beam optic sections.
Defect Detection in Superconducting Radiofrequency Cavity Surface Using C++ and OpenCV
NASA Astrophysics Data System (ADS)
Oswald, Samantha; Thomas Jefferson National Accelerator Facility Collaboration
2014-03-01
Thomas Jefferson National Accelerator Facility (TJNAF) uses superconducting radiofrequency (SRF) cavities to accelerate an electron beam. If these cavities have a small particle or defect, it can degrade the performance of the cavity. The problem at hand is inspecting the cavity for defects: little bubbles of niobium on the surface of the cavity. Thousands of pictures have to be taken of a single cavity and then looked through to see how many defects are present. A C++ program with Open Source Computer Vision (OpenCV) was constructed to reduce the number of hours spent searching through the images and to find all the defects. The SRF group is now able to use the code to identify defects in ongoing tests of SRF cavities. Real-time detection is the next step, so that the camera will detect all the defects directly while viewing the cavity instead of from saved pictures.
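The inspection logic can be caricatured without OpenCV: threshold the grayscale frame, then count connected bright components as defect candidates. This is a hypothetical stand-in for the abstract's OpenCV pipeline, written in pure Python.

```python
# Threshold-and-label defect counting: pixels above `thresh` are defect
# candidates; 4-connected components are counted via flood fill.
# (Toy stand-in for the OpenCV pipeline described in the abstract.)

def count_blobs(img, thresh):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] > thresh and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]          # flood-fill one component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and img[y][x] > thresh \
                            and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return blobs

frame = [[0, 0, 9, 0, 0],
         [0, 0, 9, 0, 0],
         [0, 0, 0, 0, 8],
         [7, 0, 0, 0, 8]]
print(count_blobs(frame, 5))  # → 3 bright spots
```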
Beam breakup in an advanced linear induction accelerator
Ekdahl, Carl August; Coleman, Joshua Eugene; McCuistian, Brian Trent
2016-07-01
Two linear induction accelerators (LIAs) have been in operation for a number of years at the Los Alamos Dual Axis Radiographic Hydrodynamic Test (DARHT) facility, and a new multipulse LIA is being developed. We have computationally investigated the beam breakup (BBU) instability in this advanced LIA. In particular, we have explored the consequences of the choice of beam injector energy and the grouping of LIA cells. We find that within the limited range of options presently under consideration for the LIA architecture, there is little adverse effect on BBU growth. The computational tool that we used for this investigation was the beam dynamics code Linear Accelerator Model for DARHT (LAMDA). To confirm that LAMDA was appropriate for this task, we first validated it through comparisons with the experimental BBU data acquired on the DARHT accelerators.
Dissemination and support of ARGUS for accelerator applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellano, T.; De Palma, L.; Laneve, D.
2015-07-01
A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is presented. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure assisted by this approach seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
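A minimal particle swarm optimizer of the kind the abstract couples to its SCL tank model can be written in a few lines. The one-dimensional quadratic objective below is a stand-in for a real cavity figure of merit, and all parameter values are conventional defaults, not the authors' choices.

```python
import random

# Minimal 1-D particle swarm optimization. Each particle is pulled toward
# its own best position (pbest) and the swarm's best (gbest).

def pso(f, lo, hi, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                      # per-particle best positions
    gbest = min(xs, key=f)             # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Stand-in objective for a cavity figure of merit; minimum at x = 2.
best = pso(lambda x: (x - 2.0) ** 2 + 1.0, -10.0, 10.0)
print(round(best, 3))  # near 2.0
```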
NASA Astrophysics Data System (ADS)
Feng, Bing
Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computations. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement with the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models: the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the upgrade of increasing the bunch population by 5 times. But the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth.
Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac (ERL), due to the extremely small emittance and high peak currents anticipated in the machine. A tune shift is discovered in the simulation; however, emittance growth of the electron beam in the electron cloud is not observed for ERL parameters.
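The payoff of the pipelining algorithm can be estimated with a back-of-the-envelope latency model: once slices of the beam flow through groups of processors in assembly-line fashion, total wall-clock time grows as slices + groups - 1 rather than slices × groups. The numbers below are illustrative only, not QuickPIC measurements.

```python
# Toy latency model of pipelining: group g can start slice k as soon as
# group g-1 has finished it, so stages overlap across groups.

def pipelined_steps(n_slices, n_groups):
    """Wall-clock steps with overlap: fill the pipe, then one slice
    completes per step."""
    return n_slices + n_groups - 1

def serial_steps(n_slices, n_groups):
    return n_slices * n_groups        # no overlap: one stage at a time

slices, groups = 64, 8
speedup = serial_steps(slices, groups) / pipelined_steps(slices, groups)
print(f"speedup with {groups} groups: {speedup:.1f}x")  # 512/71 ≈ 7.2x
```

With many more slices than groups, the speedup approaches the number of groups, which matches the abstract's claim of scaling by a factor similar to the processor count.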
Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs
NASA Astrophysics Data System (ADS)
Chhotray, Atul; Lazzati, Davide
2018-05-01
We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space, in particular during the R_ph < R_sat regime. Our results reveal new phases of fireball evolution: a transition phase with a radial extent of several orders of magnitude, over which the fireball transitions from Γ ∝ R to Γ ∝ R^0; a post-photospheric acceleration phase, where fireballs accelerate beyond the photosphere; and a Thomson-dominated acceleration phase, characterized by slow acceleration of optically thick, matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions of Lorentz factor evolution, which will be useful for deriving jet parameters.
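The Γ ∝ R to Γ ∝ R^0 transition mentioned above follows the classic fireball scaling, which can be written as a toy piecewise law. This is idealized: the paper's Monte Carlo resolves additional phases around the photosphere that this sketch ignores.

```python
# Classic fireball scaling as a piecewise law: the Lorentz factor grows
# linearly with radius until the baryons carry all the energy, then
# saturates at eta = E / (M c^2). Values below are illustrative only.

def lorentz_factor(r, r0, eta):
    return min(r / r0, eta)   # Gamma ∝ R, then Gamma ∝ R^0

r0, eta = 1.0e7, 300.0        # illustrative initial radius (cm) and eta
for r in (1e7, 1e8, 1e9, 1e10, 1e11):
    print(f"R = {r:.0e} cm  ->  Gamma = {lorentz_factor(r, r0, eta):.0f}")
```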
ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings
NASA Astrophysics Data System (ADS)
Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.
2002-12-01
We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
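The node-based architecture described above can be sketched as a minimal tracking loop: a bunch is transported through a sequence of node objects, each representing an element, effect, or diagnostic. The class and method names below are hypothetical Python stand-ins, not ORBIT's actual C++ API.

```python
# Sketch of a lattice-as-nodes tracking loop: each node applies one element,
# effect, or diagnostic to the bunch, mirroring the design described above.

class Node:
    def track(self, bunch):
        raise NotImplementedError

class Drift(Node):
    """Field-free drift: positions advance along small-angle slopes."""
    def __init__(self, length):
        self.length = length
    def track(self, bunch):
        for p in bunch:
            p["x"] += self.length * p["xp"]

class Monitor(Node):
    """Diagnostic node: records the mean transverse position."""
    def __init__(self):
        self.readings = []
    def track(self, bunch):
        self.readings.append(sum(p["x"] for p in bunch) / len(bunch))

lattice = [Drift(1.0), Monitor(), Drift(2.0), Monitor()]
bunch = [{"x": 0.0, "xp": 1e-3}, {"x": 1e-3, "xp": -1e-3}]
for node in lattice:
    node.track(bunch)
print(lattice[1].readings, lattice[3].readings)
```

Space charge kicks, apertures, or rf cavities would slot in as further `Node` subclasses without changing the tracking loop, which is the extensibility the node design buys.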
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
Spinks, Christopher D; Murphy, Aron J; Spinks, Warwick L; Lockie, Robert G
2007-02-01
Acceleration is a significant feature of game-deciding situations in the various codes of football. However, little is known about the acceleration characteristics of football players, the effects of acceleration training, or the effectiveness of different training modalities. This study examined the effects of resisted sprint (RS) training (weighted sled towing) on acceleration performance (0-15 m), leg power (countermovement jump [CMJ], 5-bound test [5BT], and 50-cm drop jump [50DJ]), gait (foot contact time, stride length, stride frequency, step length, and flight time), and joint (shoulder, elbow, hip, and knee) kinematics in men (N = 30) currently playing soccer, rugby union, or Australian football. Gait and kinematic measurements were derived from the first and second strides of an acceleration effort. Participants were randomly assigned to 1 of 3 treatment conditions: (a) an 8-week sprint training program of two 1-h sessions per week plus RS training (RS group, n = 10), (b) an 8-week nonresisted sprint training program of two 1-h sessions per week (NRS group, n = 10), or (c) control (n = 10). The results indicated that an 8-week RS training program (a) significantly improves acceleration and leg power (CMJ and 5BT) performance but is no more effective than an 8-week NRS training program, (b) significantly improves reactive strength (50DJ), and (c) has minimal impact on gait and upper- and lower-body kinematics during acceleration performance compared to an 8-week NRS training program. These findings suggest that RS training will not adversely affect acceleration kinematics and gait. Although apparently no more effective than NRS training, this training modality provides an overload stimulus to acceleration mechanics and recruitment of the hip and knee extensors, resulting in greater application of horizontal power.
Cloud-based design of high average power traveling wave linacs
NASA Astrophysics Data System (ADS)
Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.
2017-12-01
The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.
Factors Contributing to Corrosion of Steel Pilings in Duluth-Superior Harbor
2009-11-01
Accelerated corrosion of CS pilings in estuarine and marine harbors is a global phenomenon. The term "accelerated low water corrosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vay, J.-L.; Furman, M.A.; Azevedo, A.W.
2004-04-19
We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.
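The interoperation pattern described above, two codes running in one process and sharing particle arrays through a Python layer, can be sketched as follows. The module contents are invented stand-ins; only the shared-array coupling pattern mirrors the description.

```python
# Sketch of in-process code coupling: an "electron source" module and a
# "field push" module operate on the same arrays by reference, so neither
# side copies particle data. Both functions are illustrative stand-ins.

import numpy as np

positions = np.zeros((4, 3))   # shared electron positions (x, y, z)
velocities = np.zeros((4, 3))  # shared electron velocities

def electron_source(pos):
    """Stand-in for the electron-cloud code: emit electrons into shared arrays."""
    pos[:, 2] = np.arange(len(pos))       # place electrons along z

def field_push(pos, vel, dt):
    """Stand-in for the field/mover code: kick and drift in a uniform field."""
    vel[:, 0] += 1.0 * dt                 # kick from an assumed unit field
    pos += vel * dt                       # drift

electron_source(positions)
field_push(positions, velocities, dt=0.1)
# Both "codes" saw the same storage: mutations are visible to each without copies.
print(positions[0], velocities[0])
```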
Status and future of the 3D MAFIA group of codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebeling, F.; Klatt, R.; Krawzcyk, F.
1988-12-01
The group of fully three-dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite-loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self-consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low-frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.
FPGA acceleration of rigid-molecule docking codes
Sukhwani, B.; Herbordt, M.C.
2011-01-01
Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870
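The relation between per-section and overall speedups quoted above is an instance of Amdahl's law: total speedup is limited by whatever fraction of run-time is left unaccelerated. The fractions and factors below are illustrative stand-ins, not measurements from the docking code.

```python
# Amdahl's-law estimate of overall speedup from accelerating parts of a code.
# Inputs are (fraction_of_original_runtime, speedup_on_that_fraction) pairs;
# any uncovered fraction is assumed to run at its original speed.

def amdahl(fractions_speedups):
    covered = sum(f for f, _ in fractions_speedups)
    remaining = 1.0 - covered
    return 1.0 / (remaining + sum(f / s for f, s in fractions_speedups))

# e.g. 95% of run-time accelerated 100x, a further 2% accelerated 10x
# (the 10x figure here is an assumption, not taken from the article):
print(f"{amdahl([(0.95, 100.0), (0.02, 10.0)]):.1f}x overall")
```

The lesson the function makes concrete is the one the abstract relies on: once 95% of the run-time is sped up 100-fold, the residual few percent dominates, so accelerating even the next 2% noticeably moves the total.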
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.
2017-02-01
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code.
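A block matrix inversion of the kind mentioned above can be sketched with a two-block Schur-complement formula, which assembles the full inverse from smaller per-block inverses and so keeps the working set bounded. This is a generic numerical sketch, not the LSMS scattering-matrix kernel.

```python
# Blocked matrix inversion via the Schur complement: partition M into
# [[A, B], [C, D]], invert A, form S = D - C A^-1 B, and assemble M^-1
# from the block pieces. Only block-sized temporaries are ever needed.

import numpy as np

def block_inverse(M, k):
    """Invert M using a 2x2 block partition with leading block size k."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                       # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bottom_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right], [bottom_left, Sinv]])

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # comfortably invertible
assert np.allclose(block_inverse(M, 3), np.linalg.inv(M))
```

Applied recursively, the same identity underlies GPU-friendly inversion schemes in which only one block at a time must reside in accelerator memory.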
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...
2016-07-12
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn–Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
Shielding analyses for repetitive high energy pulsed power accelerators
NASA Astrophysics Data System (ADS)
Jow, H. N.; Rao, D. V.
Sandia National Laboratories (SNL) designs, tests and operates a variety of accelerators that generate large amounts of high energy Bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield the radiation workers occupying surrounding test cells and work areas. These test cells, about 9 ft high, are constructed of high-density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility and cost considerations impose certain restrictions on the design of these test cells. The SNL Health Physics division is tasked to evaluate the adequacy of each test cell design and compare resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of that effort and its results.
GPU COMPUTING FOR PARTICLE TRACKING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Song, Kai; Muriki, Krishna
2011-03-25
This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing the GPU are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and the programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multiprocessors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerator boards
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2014-10-01
The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.
Bramble, Marguerite; Maxwell, Hazel; Einboden, Rochelle; Farington, Sally; Say, Richard; Beh, Chin Liang; Stankiewicz, Grace; Munro, Graham; Marembo, Esther; Rickard, Greg
2018-05-30
This Participatory Action Research (PAR) project aimed to engage students from an accelerated 'fast track' nursing program in a mentoring collaboration, using an interdisciplinary partnership intervention with a group of academics. Student participants represented the disciplines of nursing and paramedicine with a high proportion of culturally and linguistically diverse (CALD) students. Nine student mentors were recruited and paired with academics for a three-month 'mentorship partnership' intervention. Data from two pre-intervention workshops and a post-intervention workshop were coded in NVivo11 using thematic analysis. Drawing on social inclusion theory, a qualitative analysis explored an iteration of themes across each action cycle. Emergent themes were: 1) 'building relationships for active engagement', 2) 'voicing cultural and social hierarchies', and 3) 'enacting collegiate community'. The study offers insights into issues for contemporary accelerated course delivery with a diverse student population and highlights future strategies to foster effective student engagement.
Computational Accelerator Physics. Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisognano, J.J.; Mondelli, A.A.
1997-04-01
The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Among all papers, thirty are abstracted for the Energy Science and Technology database. (AIP)
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
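The dynamic CPU/accelerator load balancing mentioned above can be sketched as a feedback loop that shifts the offloaded work fraction toward whichever resource finished its share earlier. The update rule and the timing model below are invented for illustration and are not LAMMPS's actual scheme.

```python
# Feedback-style work partitioning between CPU and accelerator: each step,
# measure the two execution times and move the offloaded fraction toward
# the faster resource until both finish together.

def rebalance(frac_gpu, t_gpu, t_cpu, gain=0.5):
    """Shift the GPU work share toward whichever resource finished earlier."""
    imbalance = (t_cpu - t_gpu) / (t_cpu + t_gpu)
    return min(1.0, max(0.0, frac_gpu + gain * imbalance))

frac = 0.5
for _ in range(20):
    # Assumed timing model: the accelerator is 4x faster per unit of work.
    t_gpu = frac / 4.0
    t_cpu = (1.0 - frac) / 1.0
    frac = rebalance(frac, t_gpu, t_cpu)
print(f"converged GPU share: {frac:.2f}")  # settles where both finish together
```

With a 4x-faster accelerator the loop settles at a 0.8 share, i.e. work is split in proportion to throughput, which is the steady state any such balancer aims for.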
Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO
NASA Astrophysics Data System (ADS)
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.
2017-03-01
The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.
Warp-X: A new exascale computing platform for beam–plasma simulations
Vay, J. -L.; Almgren, A.; Bell, J.; ...
2018-01-31
Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.
GPU accelerated manifold correction method for spinning compact binaries
NASA Astrophysics Data System (ADS)
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamic evolution of the Post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same codes executed on the central processing unit (CPU) alone. The acceleration gained by implementing the codes on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs: the speedup is nearly 13 times compared with the codes executed on the CPU for a phase space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for a black hole binary system.
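The essence of a manifold correction is to project the numerical solution back onto an invariant manifold after each ordinary integrator step, for example by rescaling velocities so the energy integral is preserved exactly. The sketch below does this for a toy Kepler-like Hamiltonian; the PN Hamiltonian and the actual correction scheme for spinning binaries are far more involved.

```python
# Velocity-scaling manifold correction for a toy two-dimensional Kepler-like
# Hamiltonian H = v^2/2 - 1/|x| (units with GM = 1). After a drifting
# integrator step, v is rescaled so the trajectory returns to H = E0.

import math

def energy(x, v):
    """Toy Hamiltonian evaluated at position x and velocity v."""
    r = math.hypot(x[0], x[1])
    return 0.5 * (v[0] ** 2 + v[1] ** 2) - 1.0 / r

def velocity_scaling_correction(x, v, E0):
    """Rescale v so that energy(x, v) == E0 (assumes the target is reachable)."""
    r = math.hypot(x[0], x[1])
    v2_target = 2.0 * (E0 + 1.0 / r)
    s = math.sqrt(v2_target / (v[0] ** 2 + v[1] ** 2))
    return (v[0] * s, v[1] * s)

x, v = (1.0, 0.0), (0.0, 1.02)   # slightly perturbed circular orbit
E0 = -0.5                         # energy of the exact circular orbit
v = velocity_scaling_correction(x, v, E0)
print(energy(x, v))               # back on the E0 manifold
```

Because each particle (here, each orbit in a phase-space scan) is corrected independently, the method is embarrassingly parallel, which is what makes the GPU mapping described above attractive.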
Warp-X: A new exascale computing platform for beam–plasma simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vay, J. -L.; Almgren, A.; Bell, J.
Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.
Transport calculations and accelerator experiments needed for radiation risk assessment in space.
Sihver, Lembit
2008-01-01
The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties in the biological effects and increase the accuracy of the risk coefficients for charged particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection discussed, and future accelerator experiments needed for testing and validating codes and shielding materials described.
Extraordinary Tools for Extraordinary Science: The Impact ofSciDAC on Accelerator Science&Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryne, Robert D.
2006-08-10
Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.
NASA Astrophysics Data System (ADS)
Ryne, Robert D.
2006-09-01
Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.
NASA Astrophysics Data System (ADS)
Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.
1998-04-01
The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
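The object hierarchy described above, a general dynamical-system base class with a generic parameter table and an accelerator subclass that overloads the advance step to push phase-space coordinates through its elements, can be sketched as follows. The names and element maps are hypothetical, and Python dicts stand in for the hash tables of the actual C++ libraries.

```python
# Sketch of a two-layer design: a generic dynamical-system base class whose
# parameter table a GUI could display and edit generically, and an
# accelerator subclass that advances (x, x') through a string of element maps.

class DynamicalSystem:
    def __init__(self, **params):
        self.params = dict(params)     # generic, GUI-accessible parameter table
    def advance(self, state):
        raise NotImplementedError      # advance state through an arbitrary map

class Accelerator(DynamicalSystem):
    def __init__(self, elements, **params):
        super().__init__(**params)
        self.elements = elements       # each element is a map on (x, xp)
    def advance(self, state):
        for element in self.elements:  # overloads the generic advance
            state = element(state)
        return state

def drift(length):
    """Field-free drift of a given length."""
    return lambda s: (s[0] + length * s[1], s[1])

def thin_quad(strength):
    """Thin-lens quadrupole kick of a given focusing strength."""
    return lambda s: (s[0], s[1] - strength * s[0])

ring = Accelerator([drift(1.0), thin_quad(0.5), drift(1.0)], name="demo ring")
print(ring.advance((1e-3, 0.0)))
```

Iterating `advance` from a trial point and looking for returns to the start is, in this picture, exactly how a GUI-level tool would hunt for fixed points of the one-turn map.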
DOT National Transportation Integrated Search
2014-11-15
The simplified procedure in design codes for determining earthquake response spectra involves estimating site coefficients to adjust available rock accelerations to site accelerations. Several investigators have noted concerns with the site coeff...
Particle-in-cell/accelerator code for space-charge dominated beam simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-05-08
Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL, and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including, for example, laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps, and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z), and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.
COLAcode: COmoving Lagrangian Acceleration code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin V.
2016-02-01
COLAcode is a serial particle-mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
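The comoving-frame idea can be illustrated with a 1D toy (our own sketch, not COLAcode's actual integrator): the time stepper advances only the residual displacement about an analytic trajectory, so coarse time steps lose no accuracy whenever the residual force is small.

```python
# Toy 1D illustration of the COLA idea: integrate motion in a frame
# comoving with an analytic "LPT-like" trajectory, so the stepper only
# has to resolve the small residual displacement.

def lpt_traj(t, x0, v0):
    # stand-in analytic trajectory (here simply exact free motion)
    return x0 + v0 * t

def cola_step(delta, ddelta, accel_residual, dt):
    # kick-drift update applied to the residual displacement only
    ddelta = ddelta + accel_residual * dt
    delta = delta + ddelta * dt
    return delta, ddelta

# With zero residual force the residual stays exactly zero, and the full
# trajectory is recovered even with very coarse steps.
x0, v0 = 1.0, 0.5
delta, ddelta = 0.0, 0.0
for _ in range(4):                    # four coarse steps of dt = 2.5
    delta, ddelta = cola_step(delta, ddelta, 0.0, 2.5)
x_full = lpt_traj(10.0, x0, v0) + delta
print(x_full)  # 6.0: the analytic part carries the motion
```

In the real method the "analytic part" is the LPT displacement field of every particle, and the residual force is the difference between the full N-body force and the LPT prediction.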
GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.
E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N
2018-03-01
GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
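The kinematic diffraction sum that such a code parallelizes over atoms and reciprocal-space points fits in a few lines. This is a generic sketch, not GAPD's implementation; atomic scattering factors are set to 1 for brevity.

```python
import numpy as np

# Kinematic (single-scattering) diffraction intensity:
#   I(q) = | sum_j f_j * exp(i q . r_j) |^2   with f_j = 1 here.

def kinematic_intensity(positions, q):
    phases = positions @ q                  # q . r_j for every atom
    amplitude = np.exp(1j * phases).sum()   # coherent sum over atoms
    return float(np.abs(amplitude) ** 2)

# Two atoms half a period apart interfere destructively at q = 2*pi*x_hat.
atoms = np.array([[0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0]])
q_destructive = np.array([2 * np.pi, 0.0, 0.0])  # phase difference of pi
q_forward = np.array([0.0, 0.0, 0.0])
print(kinematic_intensity(atoms, q_destructive))  # ~0 (cancellation)
print(kinematic_intensity(atoms, q_forward))      # N^2 = 4 (coherent)
```

GPU decomposition amounts to distributing either the atom loop (real-space) or the q-point loop (reciprocal-space) across threads, the two strategies the abstract mentions.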
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
Corkscrew Motion of an Electron Beam due to Coherent Variations in Accelerating Potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekdahl, Carl August
2016-09-13
Corkscrew motion results from the interaction of fluctuations of beam electron energy with accidental magnetic dipoles caused by misalignment of the beam transport solenoids. Corkscrew is a serious concern for high-current linear induction accelerators (LIA). A simple scaling law for corkscrew amplitude derived from a theory based on a constant-energy beam coasting through a uniform magnetic field has often been used to assess LIA vulnerability to this effect. We use a beam dynamics code to verify that this scaling also holds for an accelerated beam in a non-uniform magnetic field, as in a real accelerator. Results of simulations with this code are strikingly similar to measurements on one of the LIAs at Los Alamos National Laboratory.
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
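The innermost kernel of such a Monte Carlo light-transport code is free-path sampling from Beer-Lambert attenuation. A minimal sketch, with illustrative coefficients (not real tissue optics) and no scattering phase function:

```python
import math
import random

# Minimal Monte Carlo photon-transport kernel: sample the free path to the
# next interaction as s = -ln(xi)/mu_t and count photons that cross a slab
# without interacting. mu_t and the geometry are illustrative only.

def simulate_photons(n_photons, mu_t, slab_thickness, rng):
    transmitted = 0
    for _ in range(n_photons):
        s = -math.log(rng.random()) / mu_t   # sampled free path length
        if s > slab_thickness:               # photon crosses unscattered
            transmitted += 1
    return transmitted / n_photons

rng = random.Random(12345)                   # fixed seed for reproducibility
frac = simulate_photons(200_000, mu_t=1.0, slab_thickness=1.0, rng=rng)
print(frac)  # close to exp(-1), about 0.368
```

Because every photon history is independent, this loop parallelizes almost perfectly, which is why accelerators such as GPUs and the Xeon Phi give large speedups with little code restructuring.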
Study of shock-induced combustion using an implicit TVD scheme
NASA Technical Reports Server (NTRS)
Yungster, Shayne
1992-01-01
The supersonic combustion flowfields associated with various hypersonic propulsion systems, such as the ram accelerator, the oblique detonation wave engine, and the scramjet, are being investigated using a new computational fluid dynamics (CFD) code. The code solves the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. It employs an iterative method and a second order differencing scheme to improve computational efficiency. The code is currently being applied to study shock wave/boundary layer interactions in premixed combustible gases, and to investigate the ram accelerator concept. Results obtained for a ram accelerator configuration indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outward and downstream. The combustion process creates a high pressure region over the back of the projectile resulting in a net positive thrust forward.
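TVD schemes of this kind rely on slope limiters to avoid spurious oscillations near shocks. A minimal sketch of the classic minmod limiter (a generic illustration, not the implicit scheme of the paper):

```python
# Minmod slope limiter: slopes are limited cell by cell so that the
# reconstruction stays total-variation bounded near discontinuities.

def minmod(a, b):
    """Return the smaller-magnitude argument if signs agree, else zero."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    # limited slope in each interior cell from one-sided differences
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]

print(limited_slopes([0, 1, 2, 3]))    # smooth data: slopes pass through
print(limited_slopes([0, 0, 10, 10]))  # at a jump: slopes clipped to zero
```

Clipping the slope to zero at the jump is what prevents the overshoot and undershoot that an unlimited second-order scheme would produce at a shock.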
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
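The load-imbalance problem can be illustrated with a toy 1D domain decomposition: splitting cells evenly among ranks leaves one rank with nearly all the particles when the density is strongly peaked (as in a wakefield bubble), while splitting by approximately equal particle count balances the work. The strategy below is a generic sketch, not the algorithm proposed in the paper.

```python
# Compare two ways of assigning a 1D row of cells to 2 MPI-like ranks:
# by equal cell count vs. by (approximately) equal particle count.

def split_equal_cells(particles_per_cell, nranks):
    n = len(particles_per_cell)
    size = n // nranks
    return [particles_per_cell[i * size:(i + 1) * size]
            for i in range(nranks)]

def split_equal_particles(particles_per_cell, nranks):
    target = sum(particles_per_cell) / nranks
    chunks, current, acc = [], [], 0
    for c in particles_per_cell:
        current.append(c)
        acc += c
        if acc >= target and len(chunks) < nranks - 1:
            chunks.append(current)
            current, acc = [], 0
    chunks.append(current)
    return chunks

cells = [1, 1, 1, 1, 50, 50, 1, 1]      # a dense accelerated-bunch region
naive = [sum(ch) for ch in split_equal_cells(cells, 2)]
balanced = [sum(ch) for ch in split_equal_particles(cells, 2)]
print(naive)     # [4, 102]: one rank does almost all the particle work
print(balanced)  # [54, 52]: work is nearly even
```

Since the PIC particle push dominates run time, the per-rank particle count, not the cell count, is the quantity a load-balancing algorithm must equalize.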
Sci—Fri PM: Topics — 05: Experience with linac simulation software in a teaching environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlone, Marco; Harnett, Nicole; Jaffray, David
Medical linear accelerator education is usually restricted to use of academic textbooks and supervised access to accelerators. To facilitate the learning process, simulation software was developed to reproduce the effect of medical linear accelerator beam adjustments on resulting clinical photon beams. The purpose of this report is to briefly describe the method of operation of the software as well as the initial experience with it in a teaching environment. To first and higher orders, all components of medical linear accelerators can be described by analytical solutions. When appropriate calibrations are applied, these analytical solutions can accurately simulate the performance of all linear accelerator sub-components. Grouped together, an overall medical linear accelerator model can be constructed. Fifteen expressions in total were coded using MATLAB v 7.14. The program was called SIMAC. The SIMAC program was used in an accelerator technology course offered at our institution; 14 delegates attended the course. The professional breakdown of the participants was: 5 physics residents, 3 accelerator technologists, 4 regulators and 1 physics associate. The course consisted of didactic lectures supported by labs using SIMAC. At the conclusion of the course, eight of thirteen delegates were able to successfully perform advanced beam adjustments after two days of theory and use of the linac simulator program. We suggest that this demonstrates good proficiency in understanding of the accelerator physics, which we hope will translate to a better ability to understand real world beam adjustments on a functioning medical linear accelerator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niita, K.; Matsuda, N.; Iwamoto, Y.
The paper presents a brief description of the models incorporated in PHITS and the present status of the code, showing some benchmarking tests of the PHITS code for accelerator facilities and space radiation.
Beam-dynamics codes used at DARHT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekdahl, Jr., Carl August
Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories. For beam production: the Tricomp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code. For beam transport and acceleration: the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code. For coasting-beam transport to target: the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.
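The envelope codes named above (XTR, LAMDA) integrate equations of the KV beam-envelope type. A generic sketch with illustrative, non-DARHT parameters:

```python
# KV beam-envelope equation:  R'' = -k R + K/R + eps^2/R^3,
# with focusing strength k, space-charge perveance K, and emittance eps.
# Values below are illustrative only.

def envelope_rhs(R, k, K, eps):
    return -k * R + K / R + eps ** 2 / R ** 3

def integrate_envelope(R0, Rp0, k, K, eps, ds, nsteps):
    R, Rp = R0, Rp0
    for _ in range(nsteps):          # semi-implicit Euler in path length s
        Rp += envelope_rhs(R, k, K, eps) * ds
        R += Rp * ds
    return R

# Choosing k so that focusing balances space charge and emittance at R0
# (i.e. R'' = 0) gives a matched envelope that stays essentially constant.
R0, K, eps = 0.01, 1e-4, 1e-4
k_matched = K / R0 ** 2 + eps ** 2 / R0 ** 4
R_end = integrate_envelope(R0, 0.0, k_matched, K, eps, ds=1e-3, nsteps=1000)
print(R_end)  # remains near 0.01: the beam is matched
```

Envelope models like this are fast enough for lattice tuning, which is why they complement the far more expensive PIC codes in the list above.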
Seismic Safety Of Simple Masonry Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guadagnuolo, Mariateresa; Faella, Giuseppe
2008-07-08
Several masonry buildings comply with the rules for simple buildings provided by seismic codes. For these buildings, explicit safety verifications are not compulsory if specific code rules are fulfilled: it is assumed that fulfilling them ensures suitable seismic behaviour and thus adequate safety under earthquakes. Italian and European seismic codes differ in their requirements for simple masonry buildings, mostly concerning the building typology, the building geometry, and the acceleration at site. Obviously, a wide percentage of buildings assumed simple by codes should satisfy the numerical safety verification, so that no confusion and uncertainty arise for the designers who must use the codes. This paper aims at evaluating the seismic response of some simple unreinforced masonry buildings that comply with the provisions of the new Italian seismic code. Two-story buildings having different geometry are analysed, and results from nonlinear static analyses performed by varying the acceleration at site are presented and discussed. Indications on the congruence between code rules and the results of numerical analyses performed according to the code itself are supplied; in this context, the obtained results can provide a contribution to improving the seismic code requirements.
GPU Optimizations for a Production Molecular Docking Code*
Landaverde, Raphael; Herbordt, Martin C.
2015-01-01
Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4 core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users. PMID:26594667
New features in the design code Tlie
NASA Astrophysics Data System (ADS)
van Zeijts, Johannes
1993-12-01
We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language and implements programmable extensions, modeled after the C language, that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design, and even control-room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities such as element parameters (strength, current), transfer maps (either as Taylor series or in Lie-algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or simply typed out. The code is easily extensible with new datatypes.
Empirical evidence for site coefficients in building code provisions
Borcherdt, R.D.
2002-01-01
Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data, together with recent site characterizations based on shear-wave velocity measurements, provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.
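The way such site coefficients are applied can be illustrated with a simple table lookup plus linear interpolation between tabulated acceleration levels. The Fa values below are invented for illustration; they are not the code-provision values, though they mimic the documented trend of decreasing Fa with increasing base acceleration.

```python
# Illustrative short-period site-coefficient table, keyed by site class
# and peak rock acceleration (g). Values are made up for this sketch.
FA_TABLE = {
    "C": {0.1: 1.2, 0.2: 1.2, 0.3: 1.1, 0.4: 1.0, 0.5: 1.0},
    "D": {0.1: 1.6, 0.2: 1.4, 0.3: 1.2, 0.4: 1.1, 0.5: 1.0},
}

def site_coefficient(site_class, accel_g):
    """Linear interpolation between tabulated acceleration levels."""
    table = sorted(FA_TABLE[site_class].items())
    lo = max((a, f) for (a, f) in table if a <= accel_g)
    hi = min((a, f) for (a, f) in table if a >= accel_g)
    if lo[0] == hi[0]:
        return lo[1]
    t = (accel_g - lo[0]) / (hi[0] - lo[0])
    return lo[1] + t * (hi[1] - lo[1])

print(site_coefficient("D", 0.25))  # halfway between 1.4 and 1.2, ~1.3
```

The site acceleration is then the rock acceleration scaled by the interpolated coefficient, which is the "simplified procedure" design codes prescribe.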
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs.
Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
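The computational core that such pragma-based parallelization targets is a stencil update over the 2D grid. The generic wave-equation sketch below (not the cardiac action-potential model used in the paper) shows the loop nest that OpenACC pragmas or an OpenCL kernel would offload:

```python
import numpy as np

# Leapfrog update for the 2D wave equation on a periodic grid:
#   u_new = 2 u - u_prev + (c dt)^2 * Laplacian(u)
# The Laplacian is the 5-point stencil an accelerator kernel sweeps over.

def step(u, u_prev, c2dt2):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return 2.0 * u - u_prev + c2dt2 * lap

n = 32
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0          # point disturbance at the center
u_prev = u.copy()
for _ in range(10):
    u, u_prev = step(u, u_prev, 0.1), u

# With periodic boundaries the total displacement is conserved.
print(round(float(u.sum()), 6))
```

Each grid point's update depends only on its four neighbors, so the sweep is embarrassingly parallel, which is why a few OpenACC pragmas over the loop nest suffice to obtain GPU speedups.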
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P.; /Fermilab
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, A.M.M.; Paulson, C.C.; Peacock, M.A.
1995-10-01
A beamline systems code, that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies, is described. The overall program is a joint Grumman, G.H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.
1983-03-01
The acceleration parameters for the iteration at each point are held in the field array WACC(I,J), using the values of the relevant functions on the two sides of the slits. The code calculates a locally optimum value at each point in the field, these values being placed in WACC; the error, based on changes in x and y, is calculated by calling subroutine ERROR.
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
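The Green's-function superposition idea can be sketched for a simplified one-speed, straight-ahead attenuation equation, dφ/dx + σφ = S(x), whose Green's function is G(x, x') = exp(-σ(x - x')) for x ≥ x'. The discretization below is our illustration, not the paper's nonperturbative method.

```python
import math

# Flux as a superposition of the Green's function over a discretized source:
#   phi(x) = sum_i S_i * exp(-sigma * (x - x_i)) * dx   for cells x_i <= x.

def flux(x, source, sigma, dx):
    total = 0.0
    for i, s in enumerate(source):
        xp = (i + 0.5) * dx              # cell-center position of the source
        if xp <= x:
            total += s * math.exp(-sigma * (x - xp)) * dx
    return total

# A thin unit-strength source at the origin attenuates as exp(-sigma * x),
# matching the analytic solution by construction.
dx = 0.01
source = [1.0 / dx] + [0.0] * 999
phi = flux(5.0, source, sigma=0.5, dx=dx)
print(phi, math.exp(-0.5 * (5.0 - 0.005)))  # the two values agree
```

In the real transport problem the "source" includes fragments produced along the way, and the efficiency gain of the code comes from extending the Green's function over the whole solution domain instead of iterating perturbatively.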
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massimo, F. (Dipartimento SBAI, Università di Roma “La Sapienza”, Via A. Scarpa 14, 00161 Roma; E-mail: francesco.massimo@ensta-paristech.fr); Atzeni, S.
Architect, a time-explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. Architect substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to weakly non-linear regimes. In highly non-linear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
VizieR Online Data Catalog: Radiative forces for stellar envelopes (Seaton, 1997)
NASA Astrophysics Data System (ADS)
Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.
2000-02-01
(1) Primary data files, stages.zz. These files give data for the calculation of radiative accelerations, GRAD, for elements with nuclear charge zz. Data are available for zz = 06, 07, 08, 10, 11, 12, 13, 14, 16, 18, 20, 24, 25, 26 and 28. Calculations are made using data from the Opacity Project (see papers SYMP and IXZ). The data are given for each ionisation stage, j, tabulated on a mesh of (T, Ne, CHI), where T is temperature, Ne is electron density and CHI is an abundance multiplier. The files include data for ionisation fractions for each (T, Ne). The file contents are described in the paper ACC and as comments in the code add.f.
(2) Code add.f. This reads a file stages.zz and creates a file acc.zz giving radiative accelerations averaged over ionisation stages. The code prompts for names of input and output files. The code, as provided, gives equal weights (as defined in the paper ACC) to all stages. The weights are set in SUBROUTINE WEIGHTS, which could be changed to give any weights preferred by the user. The dependence of diffusion coefficients on ionisation stage is given by a function ZET, which is defined in SUBROUTINE ZETA. The expressions used for ZET are as given in the paper; the user can change that subroutine if other expressions are preferred. The output file contains values, ZETBAR, of ZET averaged over ionisation stages.
(3) Files acc.zz. Radiative accelerations computed using add.f as provided. The user will need to run the code add.f only if it is required to change the subroutines WEIGHTS or ZETA. The contents of the files acc.zz are described in the paper ACC and in comments contained in the code add.f.
(4) Code accfit.f. This code gives radiative accelerations, and some related data, for a stellar model. Methods used to interpolate data to the values of (T, RHO) for the stellar model are based on those used in the code opfit.for (see the paper OPF). The executable file accfit.com runs accfit.f. It uses a list of files given in accfit.files (see that file for further description). The mesh used for the abundance multiplier CHI on the output file will generally be finer than that used in the input files acc.zz; the mesh to be used is specified in a file chi.dat. For a test run, the stellar model used is given in the file 10000_4.2 (Teff = 10000 K, LOG10(g) = 4.2); the output file from that test run is acc100004.2. The contents of the output file are described in the paper ACC and as comments in the code accfit.f.
(5) Code diff.f. This code reads the output file (e.g. acc1000004.2) created by accfit.f. For any specified depth point in the model and value of CHI, it gives values of radiative accelerations, the quantity ZETBAR required for calculation of diffusion coefficients, and Rosseland-mean opacities. The code prompts for input data and creates a file recording all data calculated. The code diff.f is intended for incorporation, as a set of subroutines, in codes for diffusion calculations. (1 data file).
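The stage-averaging step that add.f is described as performing can be sketched as a weighted mean over ionisation stages. The array shapes and the weighting convention below are illustrative only; the real file formats and definitions are documented in the paper ACC and in add.f itself.

```python
# Average radiative accelerations over ionisation stages at one (T, Ne)
# point, weighting each stage by its ionisation fraction times a
# user-settable stage weight (add.f's default is equal weights).

def average_acceleration(grad_by_stage, fraction_by_stage, weights=None):
    stages = len(grad_by_stage)
    if weights is None:
        weights = [1.0] * stages          # equal weights, as in add.f
    num = sum(w * f * g for w, f, g in
              zip(weights, fraction_by_stage, grad_by_stage))
    den = sum(w * f for w, f in zip(weights, fraction_by_stage))
    return num / den

grad = [2.0, 4.0, 8.0]       # GRAD for three ionisation stages (toy values)
frac = [0.25, 0.5, 0.25]     # ionisation fractions at some (T, Ne)
print(average_acceleration(grad, frac))  # (0.5 + 2.0 + 2.0) / 1.0 = 4.5
```

Changing the weights (analogous to editing SUBROUTINE WEIGHTS) shifts the average toward whichever stages the user wants emphasized.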
Ojeda-May, Pedro; Nam, Kwangho
2017-08-08
The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model, in which both the task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version exceeded existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach, which displayed better scalability for larger numbers of CPU cores (up to 64 CPUs in the tested systems).
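The payoff of an SCF convergence accelerator can be illustrated on a scalar fixed-point problem: Aitken-style extrapolation of the iterated quantity converges far faster than plain iteration. This is only a toy analogue; the CHARMM implementation extrapolates density and Fock matrices rather than a scalar.

```python
import math

# Solve the fixed point x = cos(x) by plain iteration and by
# Aitken/Steffensen extrapolation of the iterate sequence.

def iterate_plain(x, n):
    for _ in range(n):
        x = math.cos(x)
    return x

def iterate_aitken(x, n):
    for _ in range(n):
        x1 = math.cos(x)
        x2 = math.cos(x1)
        denom = x2 - 2.0 * x1 + x
        # Aitken's delta-squared extrapolation (guard against zero denom)
        x = x - (x1 - x) ** 2 / denom if abs(denom) > 1e-15 else x2
    return x

target = 0.7390851332151607          # fixed point of cos(x)
err_plain = abs(iterate_plain(0.5, 5) - target)
err_accel = abs(iterate_aitken(0.5, 5) - target)
print(err_plain, err_accel)  # extrapolation is orders of magnitude closer
```

The same principle, fewer SCF cycles to reach self-consistency, is where the reported ~12-35% speedup comes from.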
AMBER: a PIC slice code for DARHT
NASA Astrophysics Data System (ADS)
Vay, Jean-Luc; Fawley, William
1999-11-01
The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice particle-in-cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self-generated as well as external electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and preliminary results obtained when studying emittance growth during beam propagation.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott; Chen, Yang
2013-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated; CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10 or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speedups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed, as will optimization strategies. We will discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools for investigating a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, such as AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST), which allows us to use meta-programming for all computational kernels of forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Seismic wave simulations are thus now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance across different simulations and hardware configurations.
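The BOAST approach described above is source-to-source generation: one abstract kernel description is emitted as either CUDA or OpenCL text. A toy illustration of the idea in Python; the template, qualifiers, and kernel name are invented for illustration and are not BOAST's actual output:

```python
# Toy source-to-source kernel generator in the spirit of BOAST: a single
# abstract axpy kernel body rendered for two GPU backends. Only the
# backend-specific pieces (qualifier, address space, thread index) vary.
KERNEL_TEMPLATE = (
    "{qualifier} void axpy(const float a, {mem}const float *x, "
    "{mem}float *y) {{\n"
    "  int i = {index};\n"
    "  y[i] = a * x[i] + y[i];\n"
    "}}\n"
)

def generate(target):
    """Render the kernel source for one backend ('cuda' or 'opencl')."""
    if target == "cuda":
        return KERNEL_TEMPLATE.format(
            qualifier="__global__", mem="",
            index="blockIdx.x * blockDim.x + threadIdx.x")
    if target == "opencl":
        return KERNEL_TEMPLATE.format(
            qualifier="__kernel", mem="__global ",
            index="get_global_id(0)")
    raise ValueError("unknown target: " + target)
```

The payoff of this design is that optimizations applied to the shared template (loop structure, arithmetic) propagate to every backend at once.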
Modeling multi-GeV class laser-plasma accelerators with INF&RNO
NASA Astrophysics Data System (ADS)
Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim
2016-10-01
Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient, PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we will illustrate the issues related to the guiding of a high-intensity, short-pulse, laser when a realistic description for both the laser driver and the background plasma is adopted. Work Supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.
Status and future plans for open source QuickPIC
NASA Astrophysics Data System (ADS)
An, Weiming; Decyk, Viktor; Mori, Warren
2017-10-01
QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed based on the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wakefield can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be thousands of times faster than a conventional PIC code when simulating PBAs. It uses a hybrid MPI/OpenMP parallel algorithm and can run on anything from a laptop to the largest supercomputers. The open-source QuickPIC is an object-oriented program with high-level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git
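The quasi-static separation of time scales can be caricatured as an outer loop that advances the beam on its slow time scale while an inner sweep over 2D slices in ξ computes the plasma wake from the momentarily frozen beam. A deliberately toy sketch, with a made-up coupling coefficient standing in for the real field solves:

```python
# Toy sketch of one outer step of a quasi-static PIC loop. The inner 2D
# sweep computes a wake per xi-slice from the frozen beam; the outer step
# then pushes the beam slowly with that wake. The 0.1 coupling constant
# is arbitrary and purely illustrative of the two-time-scale structure.
def quasi_static_step(beam_momenta, xi_slices, dt_beam):
    # Inner sweep: each 2D slice responds to the local (frozen) beam.
    wake = [0.1 * p for p, _xi in zip(beam_momenta, xi_slices)]
    # Outer (slow) step: advance the beam using the accumulated wakefield.
    return [p + w * dt_beam for p, w in zip(beam_momenta, wake)]
```

The speedup of real quasi-static codes comes from taking dt_beam far larger than the plasma time step that a conventional PIC code would need.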
Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian
The THOR neutral particle transport code enables simulation of complex geometries for problems ranging from reactor simulations to nuclear non-proliferation. It is undergoing thorough verification and validation (V&V), which requires computational efficiency. This has motivated various improvements, including angular parallelization, outer-iteration acceleration, and development of peripheral tools. To guide future improvements to the code's efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL's Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former's accuracy is bounded by the variability of communication on Falcon, while the latter has an error on the order of 1%.
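A parallel performance model of the kind described typically decomposes runtime into a serial term, a parallel term divided by the rank count, and a communication term. A hedged sketch with invented coefficients and an invented communication-scaling law; THOR's actual fitted model is not given in this abstract:

```python
# Sketch of a simple parallel performance model: an Amdahl-style compute
# part plus a communication term. The linear-in-ranks message cost is an
# assumed scaling, chosen only to illustrate the bottleneck trade-off.
def predicted_runtime(p, t_serial, t_parallel, latency, msg_per_rank):
    """Predicted wall time on p ranks for one made-up workload."""
    compute = t_serial + t_parallel / p       # serial + parallelizable work
    comm = latency * msg_per_rank * p         # assumed all-to-all-like growth
    return compute + comm
```

A model like this makes the bottleneck explicit: compute time shrinks with p while the communication term grows, so the minimum of predicted_runtime over p identifies the point past which adding ranks hurts.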
Probabilistic Seismic Hazard Assessment for Iraq
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onur, Tuna; Gok, Rengin; Abdulnaby, Wathiq
Probabilistic Seismic Hazard Assessments (PSHA) form the basis for most contemporary seismic provisions in building codes around the world. The current building code of Iraq was published in 1997, and an update to this edition is in the process of being released. However, there are no national PSHA studies in Iraq for the new building code to refer to for seismic loading in terms of spectral accelerations. As an interim solution, the new draft building code was considering referring to PSHA results produced in the late 1990s as part of the Global Seismic Hazard Assessment Program (GSHAP; Giardini et al., 1999). However, these results are: a) more than 15 years out of date; b) PGA-based only, necessitating rough conversion factors to calculate spectral accelerations at 0.3 s and 1.0 s for seismic design; and c) at a probability level of 10% chance of exceedance in 50 years, not the 2% that the building code requires. Hence there is a pressing need for a new, updated PSHA for Iraq.
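The two hazard levels contrasted above map to return periods through the standard Poisson assumption, T = -t / ln(1 - P): 10% in 50 years corresponds to roughly a 475-year return period, and 2% in 50 years to roughly 2,475 years. A short check of that arithmetic:

```python
import math

# Poisson-model relation between the probability P of at least one
# exceedance in t years and the mean return period T of the ground-motion
# level: P = 1 - exp(-t / T), hence T = -t / ln(1 - P).
def return_period(prob, t_years):
    """Mean return period (years) for exceedance probability prob in t_years."""
    return -t_years / math.log(1.0 - prob)
```

This is why the GSHAP maps (10% in 50 years, ~475-year hazard) cannot simply be reused where a code demands the rarer 2%-in-50-years (~2,475-year) level.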
Shock Spectrum Calculation from Acceleration Time Histories
1980-09-01
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed good agreement. The code system has been applied to calculation of the residual dose of the target station for the Mu2e experiment, and the results have been compared to approximate dosimetric approaches.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
NASA Astrophysics Data System (ADS)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsouleas, Thomas; Decyk, Viktor
Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models," Viktor K. Decyk, University of California, Los Angeles, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in low-density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference 2 below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale particle-in-cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, the slice's response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can be running simultaneously. The major difficulty arose when particles at the edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8].
Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasistatic scheme including pipelining in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys, 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June, 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July, 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June, 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August, 2008, p. 340 [AIP Conf. 
Proceedings, vol. 1086, Melville, NY, 2008]. [7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R. A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particle-in-cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June, 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005. [9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, "Update on Electron-Cloud Simulations Using the Package WARP-POSINST," Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June, 2009, paper FR5RFP078.
Programming (Tips) for Physicists & Engineers
Ozcan, Erkcan
2018-02-19
Programming for today's physicists and engineers. Work environment: today's astroparticle and accelerator experiments and the information industry rely on large collaborations. Needed more than ever: code sharing/reuse, code building and framework integration, documentation and good visualization, working remotely, and not reinventing the wheel.
Programming (Tips) for Physicists & Engineers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozcan, Erkcan
2010-07-13
Programming for today's physicists and engineers. Work environment: today's astroparticle and accelerator experiments and the information industry rely on large collaborations. Needed more than ever: code sharing/reuse, code building and framework integration, documentation and good visualization, working remotely, and not reinventing the wheel.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C. C.
The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities in developing computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.
A comparison of models for supernova remnants including cosmic rays
NASA Astrophysics Data System (ADS)
Kang, Hyesung; Drury, L. O'C.
1992-11-01
A simplified model which can follow the dynamical evolution of a supernova remnant, including the acceleration of cosmic rays, without carrying out full numerical simulations was proposed by Drury, Markiewicz, & Voelk in 1989. To explore the accuracy and the merits of using such a model, we have recalculated with the simplified code the evolution of the supernova remnants considered in Jones & Kang, in which more detailed and accurate numerical simulations were done using a full hydrodynamic code based on the two-fluid approximation. For the total energy transferred to cosmic rays the two codes are in good agreement, the acceleration efficiency being the same to within a factor of 2 or so. The dependence of the results of the two codes on the closure parameters of the two-fluid approximation is also qualitatively similar. The agreement is somewhat degraded in those cases where the shock is smoothed out by the cosmic rays.
Code comparison for accelerator design and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsa, Z.
1988-01-01
We present a comparison of results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings: the programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACETRACK. In our analysis we have considered five lattices of various sizes, with large and small bending angles, including the AGS Booster (10° bend), RHIC (2.24°), SXLS, XLS (XUV ring with 45° bend) and X-RAY rings. The differences in the integration methods used and in the treatment of the fringe fields in these codes can lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs.
LEGO: A modular accelerator design code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Donald, M.; Irwin, J.
1997-08-01
An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors, TRACY and DESPOT. All single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in three-dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and the nonlinear case. Currently, the code is used to design and simulate the lattices of PEP-II. It will also be used for commissioning.
Study on radiation production in the charge stripping section of the RISP linear accelerator
NASA Astrophysics Data System (ADS)
Oh, Joo-Hee; Oranj, Leila Mokhtari; Lee, Hee-Seock; Ko, Seung-Kook
2015-02-01
The linear accelerator of the Rare Isotope Science Project (RISP) accelerates 200 MeV/nucleon 238U ions in multiple charge states. Many kinds of radiation are generated while the primary beam is transported along the beam line. The stripping process using a thin carbon foil leads to complicated radiation environments at the 90-degree bending section. The charge distribution of 238U ions after the carbon charge stripper was calculated using the LISE++ program. Estimates of the radiation environments were carried out using the well-proven Monte Carlo codes PHITS and FLUKA. The tracks of 238U ions in various charge states were identified using the magnetic field subroutine of the PHITS code. The dose distribution caused by U beam losses along those tracks was obtained over the accelerator tunnel. A modified calculation was applied to track the multi-charged U beams, because PHITS and FLUKA were designed to transport fully ionized ion beams. In this study, the beam loss pattern after the stripping section was observed, and the radiation production by heavy ions was studied. Finally, the performance of the PHITS and FLUKA codes for estimating radiation production at the stripping section was validated by applying the modified method.
NASA Technical Reports Server (NTRS)
Habbal, Shadia R.; Gurman, Joseph (Technical Monitor)
2003-01-01
Investigations of the physical processes responsible for the acceleration of the solar wind were pursued with the development of two new solar wind codes: a hybrid code and a 2-D MHD code. Hybrid simulations were performed to investigate the interaction between ions and parallel-propagating low-frequency ion cyclotron waves in a homogeneous plasma. In a low-beta plasma such as the solar wind plasma in the inner corona, the proton thermal speed is much smaller than the Alfven speed. Vlasov linear theory predicts that protons are not in resonance with low-frequency ion cyclotron waves. However, nonlinear effects make it possible for these waves to strongly heat and accelerate protons. This study has important implications for studies of the corona and the solar wind. Low-frequency ion cyclotron waves, or Alfven waves, are commonly observed in the solar wind. Until now, it has been believed that these waves cannot heat the solar wind plasma unless cascading processes transfer their energy to the high-frequency part of the spectrum. However, this study shows that these waves may directly heat and accelerate protons nonlinearly. This process may play an important role in coronal heating and solar wind acceleration, at least in some parameter regimes.
3D Reconnection and SEP Considerations in the CME-Flare Problem
NASA Astrophysics Data System (ADS)
Moschou, S. P.; Cohen, O.; Drake, J. J.; Sokolov, I.; Borovikov, D.; Alvarado Gomez, J. D.; Garraffo, C.
2017-12-01
Reconnection is known to play a major role in particle acceleration in both solar and astrophysical regimes, yet little is known about its connection with the global scales and its contribution to the generation of SEPs relative to other acceleration mechanisms, such as the shock at a fast CME front, in the presence of a global structure such as a CME. Coupling efforts that combine both particle and global scales are necessary to answer questions about the fundamentals of the energetic processes involved. We present such a coupled modeling effort that examines particle acceleration through reconnection in a self-consistent CME-flare model in both particle and fluid regimes. Of special interest is the supra-thermal component of the acceleration due to reconnection, which will later collide with the denser chromospheric layer of the solar atmosphere and radiate in hard X-rays and γ-rays for supra-thermal electrons and protons, respectively. Two cutting-edge computational codes are used to capture the global CME and flare dynamics: a two-fluid MHD code and, for the flare scales, a 3D PIC code. Finally, we connect the simulations with current observations in different wavelengths in an effort to shed light on the unified CME-flare picture.
Advanced propeller noise prediction in the time domain
NASA Technical Reports Server (NTRS)
Farassat, F.; Dunn, M. H.; Spence, P. L.
1992-01-01
The time-domain code ASSPIN gives acousticians a powerful tool for advanced propeller noise prediction. Except for nonlinear effects, the code uses exact solutions of the Ffowcs Williams-Hawkings equation with exact blade geometry and kinematics. With the inclusion of nonaxial inflow, periodic loading noise, and adaptive time steps to accelerate computer execution, the development of this code is complete.
Su, Huei-Jiun; Hu, Jer-Ming
2012-01-01
Background and Aims The holoparasitic flowering plant Balanophora displays extreme floral reduction and was previously found to have enormous rate acceleration in the nuclear 18S rDNA region. So far, it remains unclear whether non-ribosomal, protein-coding genes of Balanophora also evolve in an accelerated fashion and whether the genes with high substitution rates retain their functionality. To tackle these issues, six different genes were sequenced from two Balanophora species and their rate variation and expression patterns were examined. Methods Sequences including nuclear PI, euAP3, TM6, LFY and RPB2 and mitochondrial matR were determined from two Balanophora spp. and compared with selected hemiparasitic species of Santalales and autotrophic core eudicots. Gene expression was detected for the six protein-coding genes and the expression patterns of the three B-class genes (PI, AP3 and TM6) were further examined across different organs of B. laxiflora using RT-PCR. Key Results Balanophora mitochondrial matR is highly accelerated in both nonsynonymous (dN) and synonymous (dS) substitution rates, whereas the rate variation of nuclear genes LFY, PI, euAP3, TM6 and RPB2 are less dramatic. Significant dS increases were detected in Balanophora PI, TM6, RPB2 and dN accelerations in euAP3. All of the protein-coding genes are expressed in inflorescences, indicative of their functionality. PI is restrictively expressed in tepals, synandria and floral bracts, whereas AP3 and TM6 are widely expressed in both male and female inflorescences. Conclusions Despite the observation that rates of sequence evolution are generally higher in Balanophora than in hemiparasitic species of Santalales and autotrophic core eudicots, the five nuclear protein-coding genes are functional and are evolving at a much slower rate than 18S rDNA. 
The mechanism or mechanisms responsible for rapid sequence evolution and concomitant rate acceleration for 18S rDNA and matR are currently not well understood and require further study in Balanophora and other holoparasites. PMID:23041381
NASA Astrophysics Data System (ADS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-05-01
Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators, and the extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
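The exponential-operator idea behind such Lie-algebraic integrators can be illustrated on the simplest possible case: splitting exp((A+B)dt) into exp(A dt) exp(B dt) yields a drift-kick integrator for a harmonic oscillator. This toy is only an illustration of operator splitting under that assumption, not the actual scheme of the code described:

```python
# Toy split-operator (drift-kick) step for x'' = -x, illustrating the
# exponential-operator factorization exp((A+B)dt) ~ exp(A dt) exp(B dt).
# The split map is symplectic, so the oscillator's energy stays bounded
# over long integrations instead of drifting as with naive Euler.
def split_step(x, v, dt):
    x = x + dt * v   # drift: apply exp(dt * A), A = v d/dx
    v = v - dt * x   # kick:  apply exp(dt * B), B = -x d/dv
    return x, v
```

The long-term stability of such factorized maps is precisely why Lie-algebraic and exponential-operator methods are attractive for accelerator transport.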
Simulation of Combustion Systems with Realistic g-jitter
NASA Technical Reports Server (NTRS)
Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.
2003-01-01
In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid, and nonpremixed or premixed gaseous combustion in nonturbulent flow, with simple combustion models. Simple combustion models were used to preserve computational efficiency, since this is meant to be an engineering code. The use of sophisticated turbulence models was also not pursued (a simple Smagorinsky-type model can be implemented if deemed appropriate), because if flow velocities are large enough for turbulence to develop in a reduced-gravity combustion scenario, it is unlikely that g-jitter disturbances (in NASA's reduced-gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) is presented. The results of this series of simulations suggested that a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and hierarchical time steps. The code applies adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Results of performance measurements with realistic particle distributions, performed on the NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, representative GPUs of the Fermi, Kepler, and Maxwell generations, show that hierarchical time stepping achieves a speedup of around a factor of 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC on the GTX TITAN X is 0.30 s when the particle distribution represents the Andromeda galaxy and 0.44 s for the NFW sphere, with 2^24 = 16,777,216 particles in both cases. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
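Hierarchical (block) time stepping typically assigns each particle a step dt_max / 2^level and, at a given sub-step, advances only the levels whose step boundary falls there. A small sketch of that bookkeeping; the power-of-two level-selection rule is the generic scheme, not necessarily GOTHIC's exact implementation:

```python
# Generic power-of-two block time stepping. Level l uses dt_max / 2**l,
# so it "fires" every 2**(max_level - l) sub-steps of the finest step
# dt_min = dt_max / 2**max_level. Only active levels are pushed, which
# is where the 3-5x speedup over a shared (global) time step comes from.
def active_levels(step, max_level):
    """Levels to advance at integer sub-step `step` within one dt_max."""
    return [l for l in range(max_level + 1)
            if step % (1 << (max_level - l)) == 0]
```

At sub-step 0 every level fires (a full synchronization point); between synchronizations only the fast, fine-level particles are integrated.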
Cosmic Rays and Their Radiative Processes in Numerical Cosmology
NASA Technical Reports Server (NTRS)
Ryu, Dongsu; Miniati, Francesco; Jones, Tom W.; Kang, Hyesung
2000-01-01
A cosmological hydrodynamic code is described, which includes a routine to compute cosmic ray acceleration and transport in a simplified way. The routine was designed to explicitly follow diffusive acceleration at shocks, and second-order Fermi acceleration and adiabatic loss in smooth flows. Synchrotron cooling of the electron population can also be followed. The updated code is intended to be used to study the properties of nonthermal synchrotron emission and inverse Compton scattering from cosmic ray electrons in clusters of galaxies, in addition to the properties of thermal bremsstrahlung emission from hot gas. The results of a test simulation using a grid of 128^3 cells are presented, in which cosmic rays and the magnetic field were treated passively and synchrotron cooling of cosmic ray electrons was not included.
NASA Astrophysics Data System (ADS)
Murray, Joseph; Dudnikova, Galina; Liu, Tung-Chang; Papadopoulos, Dennis; Sagdeev, Roald; Su, J. J.; UMD MicroPET Team
2014-10-01
Diagnostic and therapeutic nuclear medicines are produced either in nuclear reactors or with ion accelerators. In general, diagnostic radioisotopes have very short half-lives, ranging from tens of minutes for PET tracers to a few hours for SPECT tracers, so supplies of PET and SPECT radiotracers are limited by regional production facilities. For example, 18F-fluorodeoxyglucose (FDG) is the most widely used tracer for positron emission tomography because its 110-minute half-life is sufficiently long for transport from production facilities to nearby users; everything from nuclear activation to completion of imaging must be done within 4 hours. Decentralized production of diagnostic radioisotopes would be ideal for making high-specific-activity radiotracers available to researchers and clinicians. 11C, 13N, 15O and 18F can be produced by protons in the energy range 10-20 MeV. Protons with energies up to tens of MeV, generated by intense lasers interacting with hydrogen-containing targets, have been demonstrated by many groups in the past decade. We use a 2D PIC code for proton acceleration and the Geant4 Monte Carlo code for nuclear activation to compare the yields and specific activities of short-lived isotopes produced by cyclotron proton beams and laser-driven protons.
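The delivery constraint quoted above follows directly from exponential decay, using the 110-minute 18F half-life; the initial activity below is an arbitrary example value, not data from the study:

```python
# Decay of a short-lived PET tracer during transport, A(t) = A0 * 2**(-t/T_half).
# The 110-minute half-life is 18F's; the 100 GBq starting activity is an
# arbitrary example value.
import math

def remaining_activity(a0, half_life_min, elapsed_min):
    """Activity left after elapsed_min minutes of decay."""
    return a0 * 2.0 ** (-elapsed_min / half_life_min)

a0 = 100.0                                        # GBq at end of production (example)
after_4h = remaining_activity(a0, 110.0, 240.0)
print(round(after_4h, 1))                         # ≈ 22.0 GBq left after the 4-hour window
```

Barely a fifth of the activity survives the 4-hour activation-to-imaging window, which is why decentralized production is attractive.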
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
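The k-eigenvalue MC++ computes can be illustrated with the underlying power iteration; here a tiny deterministic "fission matrix" stands in for the Monte Carlo transport sweep, and the matrix entries are illustrative, not physical data:

```python
# Power iteration for the dominant (k) eigenvalue. In a Monte Carlo code the
# matrix-vector product would be a particle-transport generation; here a
# 2x2 fission matrix with made-up entries stands in for it.

def power_iteration(m, iters=200):
    """Return the dominant eigenvalue k and the fission source of matrix m."""
    n = len(m)
    src = [1.0] * n
    k = 1.0
    for _ in range(iters):
        new = [sum(m[i][j] * src[j] for j in range(n)) for i in range(n)]
        k = sum(new) / sum(src)          # eigenvalue estimate
        src = [s / k for s in new]       # renormalize the fission source
    return k, src

m = [[0.6, 0.3],
     [0.2, 0.7]]                         # illustrative fission matrix
k, src = power_iteration(m)
print(round(k, 4))                       # → 0.9, the dominant eigenvalue of m
```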
Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations
2017-05-08
Briefing charts, 05 April 2017 - 08 May 2017. ERC Incorporated / AFRL-West. Includes a SPACE simulation of a rotating detonation engine (courtesy of Dr. Christopher Lietz). DISTRIBUTION A: Approved for public release.
Computer modeling of test particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Decker, Robert B.
1988-01-01
The basic techniques and illustrative results of numerical codes suitable for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks are evaluated, with emphasis on the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsa, Z.
1988-06-16
This note presents the chromaticity calculations and code comparison results for the XLS (x-ray lithography source; Chasman-Green, XUV COSY lattice) and the 2-magnet 4 T SXLS lattices, obtained with standard beam optics codes including SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.
Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event
NASA Astrophysics Data System (ADS)
Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.
2010-12-01
The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes a model of the background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originating at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
GeNN: a code generation framework for accelerated brain simulations
NASA Astrophysics Data System (ADS)
Yavuz, Esin; Turner, James; Nowotny, Thomas
2016-01-01
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
Modeling radiation belt dynamics using a 3-D layer method code
NASA Astrophysics Data System (ADS)
Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.
2017-08-01
A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.
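The positivity property the layer method guarantees in 3-D (even with mixed diffusion terms) can be illustrated in one dimension: when an explicit diffusion update is a convex combination of neighboring values, a positive distribution stays positive. This is a generic illustration, not the layer method itself.

```python
# 1-D explicit diffusion step written as a convex combination of neighboring
# values: with r = D*dt/dx**2 <= 0.5 every weight is nonnegative, so a
# positive phase-space density can never go negative. Endpoints are held
# fixed at zero for simplicity.

def diffuse_step(f, r):
    """One explicit step of df/dt = D d2f/dx2 on a 1-D grid."""
    assert r <= 0.5, "positivity requires D*dt/dx**2 <= 1/2"
    g = f[:]
    for i in range(1, len(f) - 1):
        g[i] = r * f[i - 1] + (1 - 2 * r) * f[i] + r * f[i + 1]
    return g

f = [0.0, 0.0, 1.0, 0.0, 0.0]    # point-like initial density
for _ in range(10):
    f = diffuse_step(f, 0.25)
print([round(v, 3) for v in f])  # spreads out, stays nonnegative and symmetric
```

Schemes that violate this convex-combination structure (e.g. naive treatments of mixed α0-p terms) can produce negative fluxes, which is the problem the layer method avoids.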
Activation assessment of the soil around the ESS accelerator tunnel
NASA Astrophysics Data System (ADS)
Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.; Ene, D.
2018-06-01
Activation of the soil surrounding the ESS accelerator tunnel, calculated with the MARS15 code, is presented. A detailed soil composition comprising about 30 chemical elements is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, a built-in tool of MARS15. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. To assess the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
NASA Technical Reports Server (NTRS)
Daw, Murray S.; Mills, Michael J.
2003-01-01
We report on the progress made during the first year of the project. Most of the progress at this point has been on the theoretical and computational side. Here are the highlights: (1) A new code, tailored for high-end desktop computing, now combines modern Accelerated Dynamics (AD) with the well-tested Embedded Atom Method (EAM); (2) The new Accelerated Dynamics allows the study of relatively slow, thermally-activated processes, such as diffusion, which are much too slow for traditional Molecular Dynamics; (3) We have benchmarked the new AD code on a rather simple and well-known process: vacancy diffusion in copper; and (4) We have begun application of the AD code to the diffusion of vacancies in ordered intermetallics.
LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN
NASA Astrophysics Data System (ADS)
Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor
2017-12-01
The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and since 2011 it has been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.
Particle acceleration and transport at a 2D CME-driven shock using the HAFv3 and PATH Code
NASA Astrophysics Data System (ADS)
Li, G.; Ao, X.; Fry, C. D.; Verkhoglyadova, O. P.; Zank, G. P.
2012-12-01
We study particle acceleration at a 2D CME-driven shock and the subsequent transport in the inner heliosphere (up to 2 AU) by coupling the kinematic Hakamada-Akasofu-Fry version 3 (HAFv3) solar wind model (Hakamada and Akasofu, 1982; Fry et al., 2003) with the Particle Acceleration and Transport in the Heliosphere (PATH) model (Zank et al., 2000; Li et al., 2003, 2005; Verkhoglyadova et al., 2009). The HAFv3 model provides the evolution of a two-dimensional shock geometry and other plasma parameters, which are fed into the PATH model to investigate the effect of a varying shock geometry on particle acceleration and transport. The transport module of the PATH model is parallelized and utilizes state-of-the-art GPU computing to achieve a rapid physics-based numerical description of interplanetary energetic particles. Together with the fast execution of the HAFv3 model, the coupled code makes it possible to nowcast/forecast the interplanetary radiation environment.
UFO: A THREE-DIMENSIONAL NEUTRON DIFFUSION CODE FOR THE IBM 704
DOE Office of Scientific and Technical Information (OSTI.GOV)
Auerbach, E.H.; Jewett, J.P.; Ketchum, M.A.
A description of UFO, a code for the solution of the few-group neutron diffusion equation in three-dimensional Cartesian coordinates on the IBM 704, is given. An accelerated Liebmann flux iteration scheme is used, and optimum parameters can be calculated by the code whenever they are required. The theory and operation of the program are discussed. (auth)
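The accelerated Liebmann iteration is essentially what is now called successive over-relaxation (SOR). A minimal one-dimensional sketch on -u'' = 1 with u(0) = u(1) = 0; the grid size and relaxation parameter below are illustrative (UFO computes its own optimum parameters):

```python
# SOR ("accelerated Liebmann") for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# discretized with the standard 3-point stencil. omega = 1 is the plain
# Liebmann (Gauss-Seidel) sweep; omega > 1 accelerates convergence.

def sor(n=9, omega=1.5, sweeps=200):
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)              # boundary values stay zero
    for _ in range(sweeps):
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + h * h)   # Gauss-Seidel value
            u[i] += omega * (gs - u[i])                # over-relaxed update
    return u

u = sor()
print(round(u[5], 4))   # → 0.125, the exact x(1 - x)/2 at x = 0.5
```

For this quadratic solution the 3-point stencil is exact at the grid points, so the converged iterate matches the analytic value.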
STUDIES OF A FREE ELECTRON LASER DRIVEN BY A LASER-PLASMA ACCELERATOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montgomery, A.; Schroeder, C.; Fawley, W.
A free electron laser (FEL) uses an undulator, a set of alternating magnets producing a periodic magnetic field, to stimulate emission of coherent radiation from a relativistic electron beam. The Lasers, Optical Accelerator Systems Integrated Studies (LOASIS) group at Lawrence Berkeley National Laboratory (LBNL) will use an innovative laser-plasma wakefield accelerator to produce an electron beam to drive a proposed FEL. In order to optimize the FEL performance, the dependence on electron beam and undulator parameters must be understood. Numerical modeling of the FEL using the simulation code GINGER predicts the experimental results for given input parameters. Among the parameters studied were electron beam energy spread, emittance, and mismatch with the undulator focusing. Vacuum-chamber wakefields were also simulated to study their effect on FEL performance. Energy spread was found to be the most influential factor, with output FEL radiation power sharply decreasing for relative energy spreads greater than 0.33%. Vacuum-chamber wakefields and beam mismatch had little effect on the simulated LOASIS FEL at the currents considered. This study concludes that continued improvement of the laser-plasma wakefield accelerator electron beam will allow the LOASIS FEL to operate in an optimal regime, producing high-quality XUV and x-ray pulses.
NASA Astrophysics Data System (ADS)
Rueda, Antonio J.; Noguera, José M.; Luque, Adrián
2016-02-01
In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology the CPU code only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and future evolution of GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC, and in two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of an optimized CUDA implementation (about 3.5× slower on average), it provides a significant performance improvement over a CPU implementation (2-6×) with by far simpler code and less implementation effort.
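The flow-accumulation step benchmarked in the study can be sketched as a fixed-point iteration, the formulation that maps naturally onto OpenACC directives or CUDA kernels; the tiny drainage grid and the `downstream` encoding below are illustrative assumptions, not the paper's code.

```python
# D8 flow accumulation: each cell drains to exactly one neighbor, and the
# accumulation of a cell counts every cell that drains through it (itself
# included). D8 directions on real terrain form a DAG, so the fixed-point
# iteration below terminates.

def flow_accumulation(downstream):
    """downstream[i] = index of the cell that cell i drains to, -1 at outlets."""
    n = len(downstream)
    acc = [1] * n
    while True:
        new = [1] * n                 # every cell contributes itself
        for i, j in enumerate(downstream):
            if j >= 0:
                new[j] += acc[i]      # pass current accumulation downstream
        if new == acc:                # fixed point reached
            return acc
        acc = new

# Two headwater cells (0 and 1) drain into cell 2, which drains into outlet 3.
print(flow_accumulation([2, 2, 3, -1]))   # → [1, 1, 3, 4]
```

Each sweep only reads `acc` and writes `new`, which is why the same loop parallelizes with a single OpenACC loop directive or a CUDA kernel launch per iteration.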
NASA Astrophysics Data System (ADS)
Woolsey, L. N.; Cranmer, S. R.
2013-12-01
The study of solar wind acceleration has made several important advances recently due to improvements in modeling techniques. Existing codes and simulations test the competing theories for coronal heating, which include reconnection/loop-opening (RLO) models and wave/turbulence-driven (WTD) models. In order to compare and contrast the validity of these theories, we need flexible tools that predict the emergent solar wind properties from a wide range of coronal magnetic field structures such as coronal holes, pseudostreamers, and helmet streamers. ZEPHYR (Cranmer et al. 2007) is a one-dimensional magnetohydrodynamics code that includes Alfven wave generation and reflection and the resulting turbulent heating to accelerate the solar wind in open flux tubes. We present ZEPHYR output for a wide range of magnetic field geometries to show the effect of the magnetic field profiles on wind properties. We also investigate the competing acceleration mechanisms in ZEPHYR to determine the relative importance of the increased gas pressure from turbulent heating and the separate pressure source from the Alfven waves. To do so, we developed a code, TEMPEST, that will become publicly available for solar wind prediction. TEMPEST provides an outflow solution based on only one input: the magnetic field strength as a function of height above the photosphere. It uses correlations found in ZEPHYR between the magnetic field strength at the source surface and the temperature profile of the outflow solution to compute the wind speed profile based on the increased gas pressure from turbulent heating. With this initial solution, TEMPEST then adds the Alfven wave pressure term to the modified Parker equation and iterates to find a stable solution for the wind speed. This code can therefore predict the wind speeds observed at 1 AU based on extrapolations from magnetogram data, providing a useful tool for empirical forecasting of the solar wind.
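The kind of iteration TEMPEST performs on the modified Parker equation can be illustrated in the classic isothermal limit (no Alfven wave pressure term): at radius r the wind speed satisfies (v/cs)^2 - ln(v/cs)^2 = 4 ln(r/rc) + 4 rc/r - 3. The Newton solver and its starting guess below are our assumptions, not TEMPEST's algorithm.

```python
# Isothermal Parker wind: solve the transcendental critical-point relation
# for the supersonic branch (v > cs for r > rc) by Newton iteration.
import math

def parker_speed(r_over_rc, u=2.0, tol=1e-12):
    """Return v/cs on the supersonic branch at r/rc > 1."""
    rhs = 4.0 * math.log(r_over_rc) + 4.0 / r_over_rc - 3.0
    for _ in range(100):
        f = u * u - 2.0 * math.log(u) - rhs   # residual of the Parker relation
        df = 2.0 * u - 2.0 / u                # its derivative in u = v/cs
        step = f / df
        u -= step
        if abs(step) < tol:
            break
    return u

# The wind speed rises monotonically beyond the critical point (v = cs at r = rc).
for r in (1.5, 4.0, 10.0):
    print(r, round(parker_speed(r), 3))
```

A wave-pressure term, as in TEMPEST, adds an extra contribution to the right-hand side but leaves the iterate-to-a-stable-root structure the same.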
NASA Astrophysics Data System (ADS)
Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo
2016-09-01
Currently, most video resources online are encoded in the H.264/AVC format. Smoother video transmission can be obtained if these resources are encoded in the newest international video coding standard, High Efficiency Video Coding (HEVC). In order to improve online video transmission and storage, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the intraprediction, interprediction, and motion vector (MV) information in the H.264/AVC video stream is used to accelerate the HEVC coding. It is found through experiments that the interpredicted regions in HEVC overlap those in H.264/AVC. Therefore, intraprediction in HEVC can be skipped for regions that are interpredicted in H.264/AVC, reducing coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks is lower than a threshold. This method selects only one coding unit depth and one PU mode, further reducing the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed, based on the areas and distances between the center of each macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation in HEVC coding. The simulation results show that the proposed algorithm achieves a significant coding-time reduction with only a small rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.
NASA Astrophysics Data System (ADS)
Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian
2017-11-01
Faults are among the dangerous earthquake sources that can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitude 6.4 Mw. Following the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia in 2010 and 2016, the Lasem, Demak, and Semarang faults are the three closest earthquake sources surrounding Semarang. Ground motion from those three earthquake sources should be taken into account for structural design and evaluation. Most tall buildings in Semarang, with a minimum height of 40 meters, were designed and constructed following the 2002 and 2012 Indonesian seismic codes. This paper presents the results of a sensitivity analysis, with emphasis on predicting the deformation and inter-story drift of existing tall buildings within the city under fault earthquakes. The analysis was performed by conducting dynamic structural analysis of 8 (eight) tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw; they were used because recorded time-history data for those three fault earthquakes are inadequate. The sensitivity of a building to earthquakes can be assessed by comparing the surface response spectra calculated using the seismic code with the surface response spectra calculated from the acceleration time histories of a specific earthquake event. If the code-based surface response spectra exceed the time-history-based spectra, the structure should be stable enough to resist the earthquake force.
CFD Code Survey for Thrust Chamber Application
NASA Technical Reports Server (NTRS)
Gross, Klaus W.
1990-01-01
In the quest to find analytical reference codes, responses to a questionnaire are presented that portray the current computational fluid dynamics (CFD) program status and capability at various organizations for characterizing liquid rocket thrust chamber flow fields. Sample cases are identified to examine the ability, operational conditions, and accuracy of the codes. To select the best-suited programs for accelerated improvement, evaluation criteria are proposed.
Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments
Liang, Taiee; Bauer, Johannes M.; Liu, James C.; ...
2016-12-01
A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10^17 and 10^22 W cm^-2. This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. Finally, a comparison of the calculated bremsstrahlung dose yields with radiation measurement data is also made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstad, H.
The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software: Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.
Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition
NASA Astrophysics Data System (ADS)
Li, Jin; Liu, Zilong
2017-12-01
Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit spatial correlations within each band. However, an NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach for the NTD that overcomes its high computational cost, achieving low complexity at the cost of a slight decrease in coding performance compared to the conventional NTD. Experiments confirm that the proposed method requires less processing time while maintaining better coding performance than compression without the NTD. The proposed approach has a potential application in the lossy compression of hyper-spectral or multi-spectral images.
Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Taiee; Bauer, Johannes M.; Liu, James C.
A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10^17 and 10^22 W cm^-2. This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. Finally, a comparison of the calculated bremsstrahlung dose yields with radiation measurement data is also presented.
Microwave and Electron Beam Computer Programs
1988-06-01
Research (ONR). SCRIBE was adapted by MRC from the Stanford Linear Accelerator Center Beam Trajectory Program, EGUN. … achieved with SCRIBE. It is a version of the Stanford Linear Accelerator Center (SLAC) code EGUN (Ref. 8), extensively modified by MRC for research on
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphics processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71.
The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency of SPECT imaging simulations.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
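The multigrid idea the report applies to Proteus can be sketched in miniature. The following is an illustrative two-grid V-cycle for a 1D Poisson problem, not the Proteus implementation; the grid size, damped-Jacobi smoother, injection restriction, and sweep counts are arbitrary choices made for the sketch:

```python
import numpy as np

def smooth(u, f, h, iters):
    # Damped Jacobi sweeps for -u'' = f on a uniform grid (boundaries fixed).
    for _ in range(iters):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def two_grid(u, f, h):
    # Pre-smooth, restrict the residual, approximately solve the coarse
    # correction equation, prolong the correction back, post-smooth.
    u = smooth(u, f, h, 3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)  # residual
    rc = r[::2].copy()                     # restriction by injection
    ec = smooth(np.zeros_like(rc), rc, 2 * h, 50)  # coarse-grid "solve"
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return smooth(u, f, h, 3)

# Solve -u'' = pi^2 sin(pi x) on [0,1], exact solution u = sin(pi x).
n = 16
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(20):
    u = two_grid(u, f, h)
```

Because smoothing kills high-frequency error and the coarse grid handles the smooth remainder, each cycle contracts the error far faster than smoothing alone, which is the source of the CPU-time savings the report measures.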
Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B
2012-09-11
In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.
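The "full diagonalization plus density matrix assembly" steps that the MOPAC work offloads to optimized libraries can be illustrated with a small sketch. This is not MOPAC code; the mock symmetric "Fock" matrix and the occupation count are arbitrary, and NumPy's `eigh` stands in for the MKL/MAGMA eigensolvers named in the abstract:

```python
import numpy as np

def density_matrix(fock, n_occ):
    # Diagonalize a symmetric Fock-like matrix with a LAPACK-backed routine,
    # then assemble the closed-shell density matrix P = 2 C_occ C_occ^T.
    _, vecs = np.linalg.eigh(fock)   # eigh dispatches to optimized LAPACK
    c_occ = vecs[:, :n_occ]          # eigenvectors of the lowest n_occ levels
    return 2.0 * c_occ @ c_occ.T

rng = np.random.default_rng(0)
a = rng.standard_normal((50, 50))
fock = 0.5 * (a + a.T)               # symmetrize: a mock "Fock" matrix
p = density_matrix(fock, 10)
```

The point of the paper is that swapping hand-rolled loops for exactly this kind of tuned library call (on CPU or GPU) is where the reported speedups come from.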
Activation Assessment of the Soil Around the ESS Accelerator Tunnel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.
Activation of the soil surrounding the ESS accelerator tunnel calculated by the MARS15 code is presented. A detailed composition of the soil, which comprises about 30 different chemical elements, is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, which is a built-in tool of the MARS15 code. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portmann, Greg; /LBL, Berkeley; Safranek, James
The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high-quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but it was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix: it required a separate modeling code such as MAD to calculate the model matrix, and the user then manually loaded the data into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high-level machine control; and the accelerator modeling code AT [5] was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high-level machine control [3]. A number of new features were added while porting the code from FORTRAN, and new methods continue to evolve [7][9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovelace, III, Henry H.
In accelerator physics, models of a given machine are used to predict the behavior of the beam, magnets, and radiofrequency cavities. The use of computational models has become widespread and eases the development of the accelerator lattice. Various programs are used to create lattices and run simulations of both transverse and longitudinal beam dynamics, including Methodical Accelerator Design (MAD) in its MAD8 and MADX versions, Zgoubi, the Polymorphic Tracking Code (PTC), and many others. In this discussion, BMAD (Baby Methodical Accelerator Design) is presented as an additional tool for creating and simulating accelerator lattices for the study of beam dynamics in the Relativistic Heavy Ion Collider (RHIC).
A multicenter collaborative approach to reducing pediatric codes outside the ICU.
Hayes, Leslie W; Dobyns, Emily L; DiGiovine, Bruno; Brown, Ann-Marie; Jacobson, Sharon; Randall, Kelly H; Wathen, Beth; Richard, Heather; Schwab, Carolyn; Duncan, Kathy D; Thrasher, Jodi; Logsdon, Tina R; Hall, Matthew; Markovitz, Barry
2012-03-01
The Child Health Corporation of America formed a multicenter collaborative to decrease the rate of pediatric codes outside the ICU by 50%, double the days between these events, and improve the patient safety culture scores by 5 percentage points. A multidisciplinary pediatric advisory panel developed a comprehensive change package of process improvement strategies and measures for tracking progress. Learning sessions, conference calls, and data submission facilitated collaborative group learning and implementation. Twenty Child Health Corporation of America hospitals participated in this 12-month improvement project. Each hospital identified at least 1 noncritical care target unit in which to implement selected elements of the change package. Strategies to improve prevention, detection, and correction of the deteriorating patient ranged from relatively simple, foundational changes to more complex, advanced changes. Each hospital selected a broad range of change package elements for implementation using rapid-cycle methodologies. The primary outcome measure was reduction in codes per 1000 patient days. Secondary outcomes were days between codes and change in patient safety culture scores. Code rate for the collaborative did not decrease significantly (3% decrease). Twelve hospitals reported additional data after the collaborative and saw significant improvement in code rates (24% decrease). Patient safety culture scores improved by 4.5% to 8.5%. A complex process, such as patient deterioration, requires sufficient time and effort to achieve improved outcomes and create a deeply embedded culture of patient safety. The collaborative model can accelerate improvements achieved by individual institutions.
Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A; Kabel, A.; Lee, L.
In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of the optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity-Based DMR (SBMR), that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Kalantzis, Georgios; Tachibana, Hidenobu
2014-01-01
For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive computational time required. In this work we present an event-by-event MC code for low-energy electron and proton tracks that accelerates microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and a multi-core CPU are utilized simultaneously. The two implementation schemes were tested and compared with the sequential single-threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmark cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of the speedup of up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Use of color-coded sleeve shutters accelerates oscillograph channel selection
NASA Technical Reports Server (NTRS)
Bouchlas, T.; Bowden, F. W.
1967-01-01
Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.
Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albright, Brian James; Yin, Lin; Stark, David James
One of the long-standing problems in the community is how to model "next-generation" laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this long-standing problem.
Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection
NASA Astrophysics Data System (ADS)
Alejandro Munoz Sepulveda, Patricio
2017-04-01
The efficient electron acceleration in the solar corona, detected by means of hard X-ray emission, is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms of non-thermal electrons in solar flares. Previous works in this direction were based mostly on test-particle calculations or 2D fully kinetic PiC simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms in 3D magnetic reconnection. To this end, we carried out 3D Particle-in-Cell (PiC) numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts in the guiding-center approximation on the overall acceleration, and their dependence on different plasma parameters.
MAPA: an interactive accelerator design code with GUI
NASA Astrophysics Data System (ADS)
Bruhwiler, David L.; Cary, John R.; Shasharina, Svetlana G.
1999-06-01
The MAPA code is an interactive accelerator modeling and design tool with an X/Motif GUI. MAPA has been developed in C++ and makes full use of object-oriented features. We present an overview of its features and describe how users can independently extend the capabilities of the entire application, including the GUI. For example, a user can define a new model for a focusing or accelerating element. If the appropriate form is followed, and the new element is "registered" with a single line in the specified file, then the GUI will fully support this user-defined element type after it has been compiled and then linked to the existing application. In particular, the GUI will bring up windows for modifying any relevant parameters of the new element type. At present, one can use the GUI for phase space tracking, finding fixed points and generating line plots for the Twiss parameters, the dispersion and the accelerator geometry. The user can define new types of simulations which the GUI will automatically support by providing a menu option to execute the simulation and subsequently rendering line plots of the resulting data.
Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System
NASA Astrophysics Data System (ADS)
Aizawa, Naoto; Iwasaki, Tomohiko
2014-06-01
A safety analysis code system of beam transport and core for accelerator driven systems (ADS) is developed for the analysis of beam transients such as changes of the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part to calculate the shape and incident position of the beam at the target. In the core analysis part, neutronics, thermo-hydraulics, and cladding failure analyses are performed using the ADS dynamic calculation code ADSE, on the basis of the external source database calculated by PHITS and the cross-section database calculated by SRAC, together with programs for thermoelastic and creep cladding failure analysis. Using the code system, beam transient analyses are performed for the ADS proposed by the Japan Atomic Energy Agency. As a result, the cladding temperature increases rapidly and plastic deformation is caused within several seconds. In addition, the cladding is evaluated to fail by creep within a hundred seconds. These results show that beam transients can cause cladding failure.
MuSim, a Graphical User Interface for Multiple Simulation Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Thomas; Cummings, Mary Anne; Johnson, Rolland
2016-06-01
MuSim is a new user-friendly program designed to interface to many different particle simulation codes, regardless of their data formats or geometry descriptions. It presents the user with a compelling graphical user interface that includes a flexible 3-D view of the simulated world plus powerful editing and drag-and-drop capabilities. All aspects of the design can be parametrized so that parameter scans and optimizations are easy. It is simple to create plots and display events in the 3-D viewer (with a slider to vary the transparency of solids), allowing for an effortless comparison of different simulation codes. Simulation codes: G4beamline, MAD-X, and MCNP; more coming. Many accelerator design tools and beam optics codes were written long ago, with primitive user interfaces by today's standards. MuSim is specifically designed to make it easy to interface to such codes, providing a common user experience for all, and permitting the construction and exploration of models with very little overhead. For today's technology-driven students, graphical interfaces meet their expectations far better than text-based tools, and education in accelerator physics is one of our primary goals.
Mizuno, T; Taniguchi, M; Kashiwagi, M; Umeda, N; Tobari, H; Watanabe, K; Dairaku, M; Sakamoto, K; Inoue, T
2010-02-01
Heat load on acceleration grids by secondary particles such as electrons, neutrals, and positive ions is a key issue for long pulse acceleration of negative ion beams. The complicated behaviors of the secondary particles in a multiaperture, multigrid (MAMuG) accelerator have been analyzed using an electrostatic accelerator Monte Carlo code. The analytical results are compared to experimental ones obtained in a long pulse operation of a MeV accelerator, whose second acceleration grid (A2G) was removed to simplify the structure. The analytical results show a relatively high heat load on the third acceleration grid (A3G), since stripped electrons are deposited mainly on A3G. This heat load on the A3G can be suppressed by installing the A2G. Thus, the capability of the MAMuG accelerator to suppress heat load due to secondary particles by means of the intermediate grids is demonstrated.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
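The kernel meta-programming idea described above can be sketched in a few lines. This is illustrative only and is not the BOAST API: a single kernel description is rendered to both CUDA and OpenCL source strings, which a host runtime could then compile for the matching hardware. The `axpy` kernel and dialect tables are invented for the sketch:

```python
# One abstract kernel, two rendered dialects (CUDA and OpenCL).
TEMPLATE = (
    "{qualifier} void axpy({global_q} const float *x, {global_q} float *y,\n"
    "                      const float a, const int n) {{\n"
    "    int i = {index};\n"
    "    if (i < n) y[i] = a * x[i] + y[i];\n"
    "}}\n"
)

DIALECTS = {
    "cuda": {
        "qualifier": "__global__",
        "global_q": "",
        "index": "blockIdx.x * blockDim.x + threadIdx.x",
    },
    "opencl": {
        "qualifier": "__kernel",
        "global_q": "__global",
        "index": "get_global_id(0)",
    },
}

def generate(dialect):
    # Fill the dialect-specific qualifiers and thread-index expression
    # into the shared kernel template.
    return TEMPLATE.format(**DIALECTS[dialect])
```

Generating both back-ends from one description is what lets a single physics kernel run on either NVIDIA (CUDA) or AMD (OpenCL) hardware without maintaining two hand-written copies.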
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively.
Use of Existing CAD Models for Radiation Shielding Analysis
NASA Technical Reports Server (NTRS)
Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.
2015-01-01
The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time-consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time and analysis error.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great facility in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. A discussion of the relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability is also included. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
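The sweep-plus-source-iteration structure that Sweep3D parallelizes in 3D can be shown in one dimension. The following is an illustrative one-group 1D slab Sn solver with diamond differencing, vacuum boundaries, and classic source iteration; it is not the Sweep3D code, and the cross sections, quadrature order, and tolerances are arbitrary choices for the sketch:

```python
import numpy as np

def source_iteration(sigma_t, sigma_s, q, h, n_cells, n_angles=4,
                     tol=1e-8, max_iters=500):
    # One-group 1D slab discrete ordinates (Sn): sweep each Gauss-Legendre
    # ordinate across the mesh (left-to-right for mu > 0, right-to-left for
    # mu < 0), then update the isotropic scattering source from the new
    # scalar flux, iterating to convergence.
    mu, w = np.polynomial.legendre.leggauss(n_angles)
    phi = np.zeros(n_cells)
    for _ in range(max_iters):
        s = 0.5 * (sigma_s * phi + q)              # isotropic source per cell
        phi_new = np.zeros(n_cells)
        for m in range(n_angles):
            psi_in = 0.0                           # vacuum incoming flux
            order = range(n_cells) if mu[m] > 0 else range(n_cells - 1, -1, -1)
            a = 2.0 * abs(mu[m]) / h
            for i in order:
                psi_c = (s[i] + a * psi_in) / (sigma_t + a)  # cell average
                phi_new[i] += w[m] * psi_c         # quadrature for scalar flux
                psi_in = 2.0 * psi_c - psi_in      # diamond-difference outflow
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new
    return phi

# A 20-mean-free-path slab: the interior flux approaches the
# infinite-medium value q / (sigma_t - sigma_s).
phi = source_iteration(sigma_t=1.0, sigma_s=0.5, q=1.0, h=0.1, n_cells=200)
```

The inner sweeps have a strict upwind data dependence along each direction, which is exactly what makes parallelizing them (the wavefront strategy of Sweep3D, here ported to GPUs) nontrivial.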
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National Laboratory programs.
SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators
Luo, Yun
2015-08-29
SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
New estimation method of neutron skyshine for a high-energy particle accelerator
NASA Astrophysics Data System (ADS)
Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook
2016-09-01
Skyshine is the dominant component of the prompt radiation dose at off-site locations. Several experimental studies have estimated the neutron skyshine at a few accelerator facilities. In this work, neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at far locations. The neutron dose was calculated using the neutron energy spectra obtained from detectors placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set at 2 km above the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in the calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stapleton, and also with the simple code SHINE3. The results obtained using this method agreed well with those using Rindi's formula.
Calculations of beam dynamics in Sandia linear electron accelerators, 1984
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poukey, J.W.; Coleman, P.D.
1985-03-01
A number of code and analytic studies pertaining to the Sandia linear accelerators MABE and RADLAC were made during 1984. In this report the authors summarize the important results of these calculations. New results include a better understanding of gap-induced radial oscillations, leakage currents in a typical MABE gap, emittance growth in a beam passing through a series of gaps, some new diocotron results, and the latest diode simulations for both accelerators. 23 references, 30 figures, 1 table.
Symplectic orbit and spin tracking code for all-electric storage rings
NASA Astrophysics Data System (ADS)
Talman, Richard M.; Talman, John D.
2015-07-01
Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," that is, always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (UAL), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach, which is approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification").
Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Unified Accelerator Libraries (UAL) environment to "resurrect," or reverse engineer, the "AGS-analog" all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDMs. The companion paper also describes preliminary lattice studies for the planned proton EDM storage rings, as well as testing of the code for long-time orbit and spin tracking.
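The "magic" kinetic energy quoted above follows directly from the frozen-spin condition for an all-electric ring, G = 1/(gamma^2 - 1), where G is the proton's anomalous magnetic moment. A quick numerical consistency check (constants from CODATA; this is an illustration, not part of the eteapot code):

```python
import math

G = 1.79284735        # proton anomalous magnetic moment (CODATA)
m_p = 938.2720813     # proton rest energy, MeV

# Frozen spin in an all-electric ring requires G = 1/(gamma^2 - 1),
# so gamma_magic = sqrt(1 + 1/G).
gamma_magic = math.sqrt(1.0 + 1.0 / G)
ke_magic = (gamma_magic - 1.0) * m_p   # magic kinetic energy, about 232.8 MeV
```

Evaluating this reproduces the 232.792 MeV figure in the abstract to within rounding of the input constants.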
38 CFR 9.14 - Accelerated Benefits.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...
38 CFR 9.14 - Accelerated Benefits.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...
38 CFR 9.14 - Accelerated Benefits.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...
38 CFR 9.14 - Accelerated Benefits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...
38 CFR 9.14 - Accelerated Benefits.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...
NASA Technical Reports Server (NTRS)
Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.;
2006-01-01
FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN; part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code's physics models, with particular emphasis on the hadronic and nuclear sector.
2017-04-13
modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were...movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an...OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of the simulation task to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
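As a hedged illustration of the statistics behind the quoted 0.1% relative-error criterion, the toy analog Monte Carlo below estimates photon transmission through a purely absorbing slab and reports the binomial relative error of the tally. It is not the authors' CUDA code; the attenuation coefficient and thickness are made-up illustrative values.

```python
import math
import random

def transmission(mu=0.2, thickness=5.0, n=200_000, seed=1):
    """Analog MC estimate of uncollided transmission exp(-mu*thickness)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # sample a free path from the exponential distribution
        path = -math.log(rng.random()) / mu
        if path > thickness:
            hits += 1
    p = hits / n
    # relative error of a binomial tally, the quantity kept below 0.1%
    rel_err = math.sqrt((1.0 - p) / (p * n)) if p > 0 else float("inf")
    return p, rel_err
```

With mu*thickness = 1, the estimate should land near exp(-1) with a relative error well under 1% at this sample size.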
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis (Fermilab); Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. The improving of these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as a 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. The examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., the problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
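Von Neumann analysis of the kind employed here reduces to bounding the amplification factor of a scheme's Fourier symbol over all wavenumbers. A minimal sketch for first-order upwind advection, a toy stand-in for the Navier-Stokes schemes analyzed in the paper: the update u_j^{n+1} = u_j^n - c (u_j^n - u_{j-1}^n) is stable iff max|g(theta)| <= 1, which holds exactly for Courant number 0 < c <= 1.

```python
import numpy as np

def max_amplification(c, n_theta=721):
    """Max |g(theta)| of the upwind-advection Fourier symbol at Courant number c."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    g = 1.0 - c * (1.0 - np.exp(-1j * theta))   # Fourier symbol of the scheme
    return float(np.max(np.abs(g)))
```

Scanning c confirms the CFL bound: the maximum stays at or below 1 up to c = 1 and exceeds 1 beyond it.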
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pang, Xiaoying; Rybarcyk, Larry
HPSim is a GPU-accelerated online multi-particle beam dynamics simulation tool for ion linacs. It was originally developed for use on the Los Alamos 800-MeV proton linac. It is a "z-code" that contains typical linac beam transport elements. The linac RF-gap transformation utilizes transit-time factors to calculate the beam acceleration therein. The space-charge effects are computed using the 2D SCHEFF (Space CHarge EFFect) algorithm, which calculates the radial and longitudinal space-charge forces for cylindrically symmetric beam distributions. Other space-charge routines to be incorporated include the 3D PICNIC and a 3D Poisson solver. HPSim can simulate beam dynamics in drift tube linacs (DTLs) and coupled cavity linacs (CCLs). Elliptical superconducting cavity (SC) structures will also be incorporated into the code. The computational core of the code is written in C++ and accelerated using NVIDIA CUDA technology. Users access the core code, which is wrapped in Python/C APIs, via Python scripts that enable ease-of-use and automation of the simulations. The overall linac description, including the EPICS PV machine control parameters, is kept in an SQLite database that also contains the calibration and conversion factors required to transform the machine set points into model values used in the simulation.
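The transit-time-factor gap transformation mentioned above boils down to a thin-gap energy kick, dW = q * E0 * T * L * cos(phi), with T the transit-time factor. A hedged sketch with made-up illustrative values (not Los Alamos linac settings):

```python
import math

def gap_energy_gain(E0=2.0e6, T=0.85, L=0.05, phi_deg=-30.0, q=1.0):
    """Energy gain in eV for charge q (in units of e) crossing an RF gap.

    E0: axial field amplitude (V/m), T: transit-time factor,
    L: gap length (m), phi_deg: synchronous phase. All values here
    are illustrative placeholders.
    """
    return q * E0 * T * L * math.cos(math.radians(phi_deg))
```

The transit-time factor T < 1 accounts for the RF field changing sign while the particle crosses the gap, which is why the kick is less than the naive q*E0*L.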
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
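The core idea, consolidating dequantization with the inverse transform so that decoding is a single matrix multiply, can be sketched as follows. This hedged example uses an orthonormal DCT-II on 1D blocks; the block size and quantization step are illustrative, not the paper's parameters.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors; D @ D.T == I)."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * j + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

def encode(block, D, step):
    """Off-line: transform then uniformly quantize."""
    return np.round((D @ block) / step)

def make_decoder(D, step):
    """Fold the dequantization step into the inverse transform."""
    return D.T * step

block = np.array([8.0, 6.0, 7.0, 5.0, 3.0, 0.0, 9.0, 4.0])
D = dct_matrix(8)
codes = encode(block, D, step=0.5)
recon = make_decoder(D, step=0.5) @ codes   # one multiply at decode time
```

The decoder never sees a separate dequantization pass; the quantization step is baked into the precomputed matrix, mirroring the paper's strategy of moving work off-line so GPU-side decompression stays cheap.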
JASMIN: Japanese-American study of muon interactions and neutron detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakashima, Hiroshi (JAEA, Ibaraki); Mokhov, N.V.
Experimental studies of shielding and radiation effects at Fermi National Accelerator Laboratory (FNAL) have been carried out under a collaboration between FNAL and Japan, aiming at benchmarking of simulation codes and study of irradiation effects for the upgrade and design of new high-energy accelerator facilities. The purposes of this collaboration are (1) acquisition of shielding data in a proton beam energy domain above 100 GeV; (2) further evaluation of the predictive accuracy of the PHITS and MARS codes; (3) modification of physics models and data in these codes if needed; (4) establishment of an irradiation field for radiation effect tests; and (5) development of a code module for improved description of radiation effects. A series of experiments has been performed at the Pbar target station and the NuMI facility, using irradiation of targets with 120 GeV protons for antiproton and neutrino production, as well as the M-test beam line (M-test) for measuring nuclear data and detector responses. Various nuclear and shielding data have been measured by activation methods with chemical separation techniques as well as by other detectors such as a Bonner ball counter. Analyses of the experimental data are in progress for benchmarking the PHITS and MARS15 codes. In this presentation recent activities and results are reviewed.
A preliminary design of the collinear dielectric wakefield accelerator
NASA Astrophysics Data System (ADS)
Zholents, A.; Gai, W.; Doran, S.; Lindberg, R.; Power, J. G.; Strelnikov, N.; Sun, Y.; Trakhtenberg, E.; Vasserman, I.; Jing, C.; Kanareykin, A.; Li, Y.; Gao, Q.; Shchegolkov, D. Y.; Simakov, E. I.
2016-09-01
A preliminary design of a multi-meter-long collinear dielectric wakefield accelerator that achieves a highly efficient transfer of the drive bunch energy to the wakefields and to the witness bunch is considered. It is made from 0.5 m long accelerator modules containing a vacuum chamber with dielectric-lined walls, a quadrupole wiggler, an rf coupler, and a BPM assembly. The single-bunch breakup instability is a major limiting factor for accelerator efficiency, and BNS damping is applied to obtain stable multi-meter-long propagation of a drive bunch. Numerical simulations using a 6D particle tracking computer code are performed and tolerances to various errors are defined.
Influence of Ionization and Beam Quality on Interaction of TW-Peak CO2 Laser with Hydrogen Plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samulyak, Roman
3D numerical simulations of the interaction of a powerful CO2 laser with hydrogen jets, demonstrating the role of ionization and laser beam quality, are presented. Simulations are performed in support of the plasma wakefield accelerator experiments being conducted at the BNL Accelerator Test Facility (ATF). The CO2 laser at BNL ATF has several potential advantages for laser wakefield acceleration compared to widely used solid-state lasers. SPACE, a parallel relativistic Particle-in-Cell code developed at SBU and BNL, has been used in these studies. A novelty of the code is its set of efficient atomic physics algorithms that compute ionization and recombination rates on the grid and transfer them to particles. The primary goal of the initial BNL experiments was to characterize the plasma density by measuring the sidebands in the spectrum of the probe laser. Simulations that resolve hydrogen ionization and laser spectra help explain several trends that were observed in the experiments.
Direct measurement of the image displacement instability in a linear induction accelerator
NASA Astrophysics Data System (ADS)
Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.
2017-06-01
The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual-axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured out to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. This becomes of particular concern to users of linear induction accelerators operating in or near low magnetic guide-field tunes.
Laser beam coupling with capillary discharge plasma for laser wakefield acceleration applications
NASA Astrophysics Data System (ADS)
Bagdasarov, G. A.; Sasorov, P. V.; Gasilov, V. A.; Boldarev, A. S.; Olkhovskaya, O. G.; Benedetti, C.; Bulanov, S. S.; Gonsalves, A.; Mao, H.-S.; Schroeder, C. B.; van Tilborg, J.; Esarey, E.; Leemans, W. P.; Levato, T.; Margarone, D.; Korn, G.
2017-08-01
One of the most robust methods, demonstrated to date, of accelerating electron beams by laser-plasma sources is the utilization of plasma channels generated by the capillary discharges. Although the spatial structure of the installation is simple in principle, there may be some important effects caused by the open ends of the capillary, by the supplying channels etc., which require a detailed 3D modeling of the processes. In the present work, such simulations are performed using the code MARPLE. First, the process of capillary filling with cold hydrogen before the discharge is fired, through the side supply channels is simulated. Second, the simulation of the capillary discharge is performed with the goal to obtain a time-dependent spatial distribution of the electron density near the open ends of the capillary as well as inside the capillary. Finally, to evaluate the effectiveness of the beam coupling with the channeling plasma wave guide and of the electron acceleration, modeling of the laser-plasma interaction was performed with the code INF&RNO.
GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy
NASA Astrophysics Data System (ADS)
Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro
2011-03-01
The phase-field simulation of dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on the GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA Tesla C1060 GPU and the developed program code. The results showed that the GPU calculation for 576³ computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, the computation with the GPU is demonstrated to be 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time, fully three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
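The kind of stencil update at the heart of such a phase-field code can be sketched as a single explicit Allen-Cahn step on a periodic 2D grid. This is a hedged toy, not the paper's binary-alloy model; the GPU version performs the same neighbor gathers, which is why shared-memory caching pays off. All parameters are illustrative.

```python
import numpy as np

def allen_cahn_step(phi, dt=0.05, dx=1.0, eps2=1.0):
    """One explicit Euler step of d(phi)/dt = eps2*lap(phi) - (phi^3 - phi)."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
    # double-well reaction term drives phi toward the phases +1 / -1
    return phi + dt * (eps2 * lap - (phi**3 - phi))

rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal((64, 64))   # small random initial noise
for _ in range(200):
    phi = allen_cahn_step(phi)
```

After a few hundred steps the noise coarsens into domains saturating near ±1, the qualitative behavior any phase-field solidification solver must reproduce before adding alloy thermodynamics.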
Numerical Simulation of MIG for 42 GHz, 200 kW Gyrotron
NASA Astrophysics Data System (ADS)
Singh, Udaybir; Bera, Anirban; Kumar, Narendra; Purohit, L. P.; Sinha, Ashok K.
2010-06-01
A triode-type magnetron injection gun (MIG) for a 42 GHz, 200 kW gyrotron for an Indian TOKAMAK system is designed using the commercially available code EGUN. The operating voltages of the modulating anode and the accelerating anode are 29 kV and 65 kV, respectively. The operating mode of the gyrotron is TE03, operated at the fundamental harmonic. The simulated results of the MIG obtained with the EGUN code are validated with another trajectory code, TRAK.
New methods in WARP, a particle-in-cell code for space-charge dominated beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grote, D., LLNL
1998-01-12
The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space-charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being "space-charge dominated") and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments are discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining the essential physics. An interface using Python as the interpreter layer instead of Basis has been developed. A parallel version of WARP has also been developed using Python.
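The electrostatic PIC cycle WARP is built around (deposit charge, solve for the field, gather it back to the particles) can be sketched in 1D. This is a hedged toy with eps0 = 1 and illustrative particle positions, not WARP itself.

```python
import numpy as np

def deposit_and_solve(x, q, nx, L):
    """CIC charge deposition + periodic spectral solve of -phi'' = rho."""
    dx = L / nx
    g = x / dx
    i0 = np.floor(g).astype(int) % nx
    w = g - np.floor(g)                      # cloud-in-cell fractional weights
    rho = np.zeros(nx)
    np.add.at(rho, i0, q * (1.0 - w) / dx)
    np.add.at(rho, (i0 + 1) % nx, q * w / dx)
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    rho_k = np.fft.fft(rho - rho.mean())     # neutralizing background
    phi_k = np.zeros_like(rho_k)
    nz = k != 0
    phi_k[nz] = rho_k[nz] / k[nz] ** 2       # k^2 * phi_k = rho_k
    return np.fft.ifft(-1j * k * phi_k).real # E = -d(phi)/dx

def gather(E, x, nx, L):
    """Interpolate the grid field back to the particles (same CIC kernel)."""
    dx = L / nx
    g = x / dx
    i0 = np.floor(g).astype(int) % nx
    w = g - np.floor(g)
    return (1.0 - w) * E[i0] + w * E[(i0 + 1) % nx]

x = np.array([1.1, 3.7, 8.2])                # illustrative particle positions
E_grid = deposit_and_solve(x, q=1.0, nx=32, L=10.0)
E_part = gather(E_grid, x, nx=32, L=10.0)
```

Because deposition and gather share the same linear kernel and the spectral solve is symmetric, the self-consistent forces sum to zero to machine precision, a standard momentum-conservation sanity check for PIC cores.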
Chaotic dynamics in accelerator physics
NASA Astrophysics Data System (ADS)
Cary, J. R.
1992-11-01
Substantial progress was made in several areas of accelerator dynamics. We have completed a design of an FEL wiggler with adiabatic trapping and detrapping sections to develop an understanding of longitudinal adiabatic dynamics and to create efficiency enhancements for recirculating free-electron lasers. We developed a computer code for analyzing the critical KAM tori that bound the dynamic aperture in circular machines. Studies of modes that arise due to the interaction of coasting beams with a narrow-spectrum impedance have begun. During this research, educational and research ties with the accelerator community at large have been strengthened.
Generation of low-emittance electron beams in electrostatic accelerators for FEL applications
NASA Astrophysics Data System (ADS)
Chen, Teng; Elias, Luis R.
1995-02-01
This paper reports results of transverse emittance studies and beam propagation in electrostatic accelerators for free-electron laser applications. In particular, we discuss emittance growth analysis of a low-current electron beam system consisting of a miniature thermionic electron gun and a National Electrostatics Corporation (NEC) accelerator tube. The emittance growth phenomenon is discussed in terms of thermal effects at the electron gun cathode and aberrations produced by field gradient changes occurring inside the electron gun and throughout the accelerator tube. A method of reducing aberrations using a solenoidal magnetic field is described. Analysis of the electron beam emittance was done with the EGUN code. Beam propagation along the accelerator tube was studied using a cylindrically symmetric beam envelope equation that included beam self-fields and the external accelerator fields, which were derived from POISSON simulations.
Singh, Tiratha Raj; Tsagkogeorga, Georgia; Delsuc, Frédéric; Blanquart, Samuel; Shenkar, Noa; Loya, Yossi; Douzery, Emmanuel Jp; Huchon, Dorothée
2009-11-17
Tunicates represent a key metazoan group as the sister-group of vertebrates within chordates. The six complete mitochondrial genomes available so far for tunicates have revealed distinctive features. Extensive gene rearrangements and particularly high evolutionary rates have been evidenced with regard to other chordates. This peculiar evolutionary dynamics has hampered the reconstruction of tunicate phylogenetic relationships within chordates based on mitogenomic data. In order to further understand the atypical evolutionary dynamics of the mitochondrial genome of tunicates, we determined the complete sequence of the solitary ascidian Herdmania momus. This genome from a stolidobranch ascidian presents the typical tunicate gene content with 13 protein-coding genes, 2 rRNAs and 24 tRNAs which are all encoded on the same strand. However, it also presents a novel gene arrangement, highlighting the extreme plasticity of gene order observed in tunicate mitochondrial genomes. Probabilistic phylogenetic inferences were conducted on the concatenation of the 13 mitochondrial protein-coding genes from representatives of major metazoan phyla. We show that whereas standard homogeneous amino acid models support an artefactual sister position of tunicates relative to all other bilaterians, the CAT and CAT+BP site- and time-heterogeneous mixture models place tunicates as the sister-group of vertebrates within monophyletic chordates. Moreover, the reference phylogeny indicates that tunicate mitochondrial genomes have experienced a drastic acceleration in their evolutionary rate that equally affects protein-coding and ribosomal-RNA genes. This is the first mitogenomic study supporting the new chordate phylogeny revealed by recent phylogenomic analyses. It illustrates the beneficial effects of an increased taxon sampling coupled with the use of more realistic amino acid substitution models for the reconstruction of animal phylogeny.
Cryogenic distribution box for Fermi National Accelerator Laboratory
NASA Astrophysics Data System (ADS)
Svehla, M. R.; Bonnema, E. C.; Cunningham, E. K.
2017-12-01
Meyer Tool & Mfg., Inc (Meyer Tool) of Oak Lawn, Illinois is manufacturing a cryogenic distribution box for Fermi National Accelerator Laboratory (FNAL). The distribution box will be used for the Muon-to-electron conversion (Mu2e) experiment. The box includes twenty-seven cryogenic valves, two heat exchangers, a thermal shield, and an internal nitrogen separator vessel, all contained within a six-foot diameter ASME coded vacuum vessel. This paper discusses the design and manufacturing processes that were implemented to meet the unique fabrication requirements of this distribution box. Design and manufacturing features discussed include: 1) Thermal strap design and fabrication, 2) Evolution of piping connections to heat exchangers, 3) Nitrogen phase separator design, 4) ASME code design of vacuum vessel, and 5) Cryogenic valve installation.
RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations
NASA Astrophysics Data System (ADS)
Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy
RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and have improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.
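The multigrid acceleration mentioned above can be sketched for the simplest possible case, a 1D Poisson problem on a uniform grid. This is a generic textbook V-cycle under illustrative assumptions (weighted Jacobi smoothing, full-weighting restriction, linear prolongation), not RMG's actual kernel:

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for -u'' = f with zero Dirichlet boundaries.
    Grid size must be 2^k + 1; u is updated in place and returned."""
    def smooth(u, iters):
        for _ in range(iters):  # weighted-Jacobi relaxation, omega = 0.8
            u[1:-1] += 0.8 * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2 * u[1:-1])
    smooth(u, n_smooth)                        # pre-smoothing
    if len(u) > 3:
        r = np.zeros_like(u)                   # residual of -u'' = f
        r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
        rc = r[::2].copy()                     # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        ec = np.zeros_like(rc)
        v_cycle(ec, rc, 2 * h, n_smooth)       # recurse on the coarse grid
        e = np.zeros_like(u)                   # linear-interpolation prolongation
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        u += e                                 # coarse-grid correction
    smooth(u, n_smooth)                        # post-smoothing
    return u
```

The point of the recursion is that smooth error components, which plain relaxation removes slowly, become oscillatory (and cheap to remove) on coarser grids; this is what makes multigrid an effective convergence accelerator for grid-based solvers.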
NASA Astrophysics Data System (ADS)
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex
2018-04-01
We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and of the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular LAMMPS code running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
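The EAM functional form being implemented, a per-atom embedding energy plus a pair term, can be written as a tiny O(N^2) CPU reference. This is a generic sketch with toy potential functions, not the HOOMD-blue implementation; all names are illustrative:

```python
import numpy as np

def eam_energy(positions, box, f_embed, rho_pair, phi_pair, rcut):
    """Total EAM energy: E = sum_i F(rho_i) + 1/2 sum_{i != j} phi(r_ij),
    with host electron density rho_i = sum_j rho(r_ij).
    O(N^2) reference loop with the periodic minimum-image convention."""
    n = len(positions)
    e_pair, rho = 0.0, np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            d = positions[i] - positions[j]
            d -= box * np.round(d / box)      # minimum image
            r = np.linalg.norm(d)
            if r < rcut:
                rho[i] += rho_pair(r)         # density contribution to both atoms
                rho[j] += rho_pair(r)
                e_pair += phi_pair(r)         # pair term counted once per pair
    return e_pair + sum(f_embed(x) for x in rho)
```

A production GPU code replaces the double loop with neighbor lists and tabulated splines for F, rho, and phi, but the energy decomposition is the same.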
Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.
Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, and neighboring inner loops may exhibit different concurrency patterns (e.g., reduction vs. forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.
Physical Models for Particle Tracking Simulations in the RF Gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shishlo, Andrei P.; Holmes, Jeffrey A.
2015-06-01
This document describes the algorithms that are used in the PyORBIT code to track the particles accelerated in the Radio-Frequency cavities. It gives the mathematical description of the algorithms and the assumptions made in each case. The derived formulas have been implemented in the PyORBIT code. The necessary data for each algorithm are described in detail.
NASA Astrophysics Data System (ADS)
Eisenbach, Markus
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Calculations of skyshine from an intense portable electron linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, G.P.; Hughes, H.G.; Fry, D.A.
1994-12-31
The MCNP Monte Carlo code has been used at Los Alamos to calculate skyshine and terrain albedo effects from an intense portable electron linear accelerator that is to be used by the Russian Federation to radiograph nuclear weapons that may have been damaged by accidents. Relative dose rate profiles have been calculated. The design of the accelerator, along with a diagram, is presented.
Applicability of a Bonner sphere technique for pulsed neutrons in a 120 GeV proton facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanami, T.; Hagiwara, M.; Iwase, H.
2008-02-01
The data on neutron spectra and intensity behind shielding are important for the radiation safety design of high-energy accelerators, since neutrons are capable of penetrating thick shielding and activating materials. Corresponding particle transport codes, which incorporate physics models of neutron and other particle production, transport, and interaction, have been developed and used worldwide [1-8]. The results of these codes have been validated through numerous comparisons with experimental results taken in simple geometries. For neutron generation and transport, several related experiments have been performed to measure neutron spectra, attenuation lengths and reaction rates behind shielding walls of various thicknesses and materials in the energy range up to several hundreds of MeV [9-11]. The data have been used to benchmark, and modify if needed, the simulation models and parameters in the codes, as well as to serve as reference data for radiation safety design. To obtain such data above several hundreds of MeV, a Japan-Fermi National Accelerator Laboratory (FNAL) collaboration for shielding experiments was started in 2007, based on a suggestion from the specialist meeting on shielding, Shielding Aspects of Accelerators, Targets and Irradiation Facilities (SATIF), because of the very limited data available in the high-energy region (see, for example, [12]). As part of this shielding experiment, a set of Bonner spheres (BS) was tested at the antiproton production target facility (pbar target station) at FNAL to obtain neutron spectra induced by a 120-GeV proton beam in concrete and iron shielding. Generally, utilization of an active detector around high-energy accelerators requires an improvement of its readout to overcome the burst of secondary radiation, since the accelerator delivers an intense beam to a target in a short period after a relatively long acceleration period. In this paper, we employ the BS for a spectrum measurement of neutrons that penetrate the shielding wall of the pbar target station at FNAL.
Symplectic orbit and spin tracking code for all-electric storage rings
Talman, Richard M.; Talman, John D.
2015-07-22
Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," that is, always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (UAL), to be used for long-term tracking of particle orbits and spins in electric-bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak-focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach of approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification"). Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Unified Accelerator Libraries (UAL) environment to "resurrect," or reverse engineer, the "AGS-analog" all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDMs. As a result, the companion paper also describes preliminary lattice studies for the planned proton EDM storage rings, as well as testing of the code for long-time orbit and spin tracking.
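The quoted magic energy follows from the standard frozen-spin condition for an all-electric ring, G = 1/(gamma^2 - 1), where G is the proton's anomalous magnetic moment. A quick numerical check using standard constants (this is an independent verification sketch, not code from eteapot):

```python
import math

G = 1.79284735      # proton anomalous magnetic moment, (g - 2) / 2
MP = 938.2720813    # proton rest energy in MeV

# Frozen-spin ("magic") condition in an all-electric ring: G = 1 / (gamma^2 - 1)
gamma_magic = math.sqrt(1.0 + 1.0 / G)
ke_magic = (gamma_magic - 1.0) * MP   # magic kinetic energy in MeV
print(round(ke_magic, 3))             # close to the 232.792 MeV quoted above
```

At this energy the in-plane spin precession rate relative to the momentum vanishes, so any slow out-of-plane precession can be attributed to the EDM.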
Accurate and efficient spin integration for particle accelerators
Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...
2015-02-01
Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
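The quaternion representation of spin rotations mentioned above can be sketched as follows; the function names are illustrative and this is not the GPUSPINTRACK source:

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation by `angle` about `axis`."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_mul(a, b):
    """Hamilton product: the rotation `a` applied after the rotation `b`."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def rotate(q, s):
    """Apply the unit quaternion q to a spin vector s: s' = q s q*."""
    qs = np.concatenate(([0.0], s))
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate of q
    return quat_mul(quat_mul(q, qs), qc)[1:]
```

Composing many small per-element spin kicks as one running quaternion product is cheaper than chaining 3x3 rotation matrices, and a single renormalization at the end keeps the accumulated rotation exactly orthogonal, which is part of why quaternions accelerate spin tracking.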
Sheath field dynamics from time-dependent acceleration of laser-generated positrons
NASA Astrophysics Data System (ADS)
Kerr, Shaun; Fedosejevs, Robert; Link, Anthony; Williams, Jackson; Park, Jaebum; Chen, Hui
2017-10-01
Positrons produced in ultraintense laser-matter interactions are accelerated by the sheath fields established by fast electrons, typically resulting in quasi-monoenergetic beams. Experimental results from OMEGA EP show higher order features developing in the positron spectra when the laser energy exceeds one kilojoule. 2D PIC simulations using the LSP code were performed to give insight into these spectral features. They suggest that for high laser energies multiple, distinct phases of acceleration can occur due to time-dependent sheath field acceleration. The detailed dynamics of positron acceleration will be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, and funded by LDRD 17-ERD-010.
NASA Astrophysics Data System (ADS)
de Soria-Santacruz Pich, M.; Drozdov, A.; Menietti, J. D.; Garrett, H. B.; Kellerman, A. C.; Shprits, Y. Y.
2016-12-01
The radiation belts of Jupiter are the most intense of all the planets in the solar system. Their source is not well understood, but they are believed to be the result of inward radial transport from beyond the orbit of Io. In the case of Earth, the radiation belts are the result of local acceleration and radial diffusion driven by whistler waves, and it has been suggested that this type of acceleration may also be significant in the magnetosphere of Jupiter. Multiple diffusion codes have been developed to study the dynamics of the Earth's magnetosphere and characterize the interaction between relativistic electrons and whistler waves; in the present paper we adapt one of these codes, the two-dimensional version of the Versatile Electron Radiation Belt (VERB) computer code, to the case of the Jovian magnetosphere. We use realistic parameters to determine the importance of whistler emissions in the acceleration and loss of electrons in the Jovian magnetosphere. More specifically, we use an extensive wave survey from the Galileo spacecraft and initial conditions derived from the Galileo Interim Radiation Electron Model version 2 (GIRE2) to estimate the pitch angle and energy diffusion of the electron population due to lower- and upper-band whistlers as a function of latitude and radial distance from the planet, and we calculate the decay rates that result from this interaction.
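Diffusion codes of the VERB type solve a Fokker-Planck equation for the electron distribution; the 1D essence of such a solver can be sketched with an explicit, conservative finite-difference update. This is a generic sketch with made-up names and boundary choices, not VERB's actual scheme:

```python
import numpy as np

def diffuse(f, d_coef, dx, dt, steps):
    """Explicit update for df/dt = d/dx( D(x) df/dx ) on a uniform grid,
    with a zero-flux (reflecting) left boundary and f = 0 at the right
    boundary as a loss-cone-like sink. Stable for dt <= dx^2 / (2 max D)."""
    f = f.copy()
    d_half = 0.5 * (d_coef[:-1] + d_coef[1:])   # D evaluated at cell interfaces
    for _ in range(steps):
        flux = d_half * np.diff(f) / dx         # interface fluxes D * df/dx
        f[1:-1] += dt / dx * np.diff(flux)      # conservative divergence of flux
        f[0] = f[1]                             # reflecting boundary
        f[-1] = 0.0                             # absorbing boundary (losses)
    return f
```

With the absorbing boundary switched on, the total content of the distribution decays in time, which is the 1D analogue of the electron decay rates computed in the study.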
Computational tools and lattice design for the PEP-II B-Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Irwin, J.; Nosochkov, Y.
1997-02-01
Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate performance of the lattices on the entire tune-plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT.
NASA Technical Reports Server (NTRS)
Liu, Wei; Petrosian, Vahe; Mariska, John T.
2009-01-01
Acceleration and transport of high-energy particles and the fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while the plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus the chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.
Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2
NASA Technical Reports Server (NTRS)
Lorenzini, E.
1985-01-01
This Quarterly Report deals with the deployment maneuver of a single-axis, vertical constellation with three masses. A new, easy-to-handle computer code that simulates the two-dimensional dynamics of the constellation has been implemented. This computer code is used for designing control laws for the deployment maneuver that minimize the acceleration level of the low-g platform during the maneuver.
pycola: N-body COLA method code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias
2015-09-01
pycola is a multithreaded Python/Cython N-body code, implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
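The LPT piece of the COLA idea, moving particles along analytic first-order (Zel'dovich) trajectories so the N-body solver only has to handle the residual, can be sketched in 1D. This is a generic illustration under simplifying assumptions, not pycola's implementation:

```python
import numpy as np

def zeldovich_positions(delta0, box_l, growth):
    """1D Zel'dovich (first-order LPT) positions: x = q + D * psi(q),
    where the displacement field psi satisfies d psi / dq = -delta0,
    solved here in Fourier space: psi_k = i * delta_k / k."""
    n = len(delta0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_l / n)
    dk = np.fft.fft(delta0)
    psi_k = np.zeros(n, dtype=complex)
    psi_k[1:] = 1j * dk[1:] / k[1:]
    psi = np.real(np.fft.ifft(psi_k))
    q = (np.arange(n) + 0.5) * box_l / n     # unperturbed Lagrangian positions
    return (q + growth * psi) % box_l        # displaced, periodically wrapped
```

For a single cosine density mode the displacement is analytic, which gives an exact check: particles drift away from underdense regions toward the density peaks.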
Simulations of the plasma dynamics in high-current ion diodes
NASA Astrophysics Data System (ADS)
Boine-Frankenheim, O.; Pointon, T. D.; Mehlhorn, T. A.
Our time-implicit fluid/Particle-In-Cell (PIC) code DYNAID [1] is applied to problems relevant to applied-B ion diode operation. We present simulations of the laser ion source, which will soon be employed on the SABRE accelerator at SNL, and of the dynamics of the anode source plasma in the applied electric and magnetic fields. DYNAID is still a test-bed for a higher-dimensional simulation code. Nevertheless, the code can already give new theoretical insight into the dynamics of plasmas in pulsed power devices.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Rozendaal, H. L.
1977-01-01
A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed characteristics were specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses were performed. At high speeds, centrifugal lift effects were accounted for. Extensive turbojet and ramjet engine scaling procedures were incorporated in the code.
GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid
NASA Astrophysics Data System (ADS)
Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua
2016-10-01
A GPU-accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained on GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the older GT9800 GPU and the serial code running on an E3-1230 V2 CPU. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer C2050 GPU, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes achieve a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that parallel execution of the cell-based AMR method on the GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.
1987-01-01
Equivalent-Groups Versus Single-Group Equating Designs for the Accelerated CAT-ASVAB Project. Stoloff, Peter H.; Center for Naval Analyses, a division of Hudson Institute. The report concerns the Accelerated CAT-ASVAB Program (ACAP): aptitude tests, the Armed Services Vocational Aptitude Battery (ASVAB), and the Computerized Adaptive Test (CAT).
Testing Bonner sphere spectrometers in the JRC-IRMM mono-energetic neutron beams
NASA Astrophysics Data System (ADS)
Bedogni, R.; Domingo, C.; Esposito, A.; Chiti, M.; García-Fusté, M. J.; Lovestam, G.
2010-08-01
Within the framework of the Euratom Transnational Access programme, a specific sub-programme, called NUDAME (neutron data measurements at IRMM), was dedicated to support neutron measurement experiments at the accelerator-based facilities of the JRC-IRMM Geel, Belgium. In this context, the INFN-LNF and UAB groups undertook two separate experiments at the 7 MV Van de Graaff facility, aimed at testing their Bonner sphere spectrometers (BSS) with mono-energetic neutron beams. Both research groups routinely employ the BSS in neutron spectra measurements for radiation protection dosimetry purposes, where accurate knowledge of the BSS response is a mandatory condition for correct dose evaluations. This paper presents the results obtained by both groups, focusing on: (1) the comparison between the value of neutron fluence provided as reference data and that obtained by applying the FRUIT unfolding code to the measured BSS data and (2) the experimental validation of the response matrices of the BSSs, previously derived with Monte Carlo simulations.
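Unfolding a Bonner sphere reading set means inverting counts = R·phi for a nonnegative spectrum phi, given the measured response matrix R that the experiment validates. A minimal maximum-likelihood (MLEM) iteration illustrates the idea; FRUIT itself is a parametric unfolding code, so this is a generic stand-in with made-up names:

```python
import numpy as np

def mlem_unfold(counts, response, iters=200):
    """Maximum-likelihood (MLEM) unfolding of Bonner-sphere readings:
    counts[i] = sum_j response[i, j] * fluence[j].
    The multiplicative update keeps the spectrum nonnegative by construction."""
    m, n = response.shape
    phi = np.full(n, counts.sum() / response.sum())   # flat starting spectrum
    norm = response.sum(axis=0)                       # column sensitivities
    for _ in range(iters):
        folded = response @ phi                       # predicted sphere readings
        phi *= (response.T @ (counts / folded)) / norm
    return phi
```

Because each update multiplies by the ratio of measured to predicted readings, the refolded spectrum R·phi converges toward the measured counts, which is exactly the consistency check (reference fluence vs. unfolded fluence) described in the abstract.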
Recombinant blood group proteins for use in antibody screening and identification tests.
Seltsam, Axel; Blasczyk, Rainer
2009-11-01
The present review elucidates the potential of recombinant blood group proteins (BGPs) for red blood cell (RBC) antibody detection and identification in pretransfusion testing, and the achievements in this field so far. Many BGPs have been expressed, both eukaryotically and prokaryotically, in sufficient quantity and quality for RBC antibody testing. Recombinant BGPs can be incorporated in soluble protein reagents or solid-phase assays such as ELISA, color-coded microsphere and protein microarray chip-based techniques. Because novel recombinant protein-based assays use single antigens, a positive reaction of a serum with the recombinant protein directly indicates the presence and specificity of the target antibody. Conversely, conventional RBC-based assays use panels of human RBCs carrying a large number of blood group antigens at the same time and require negative reactions of samples with antigen-negative cells for indirect determination of antibody specificity. Because of their capacity for single-step, direct RBC antibody determination, recombinant protein-based assays may greatly facilitate and accelerate the identification of common and rare RBC antibodies.
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with an average beam current lower than 10 nA and an energy up to 150 MeV is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure, reaching the energy of 52 MeV. Then a conventional CCL (Coupled Cavity Linac) with side coupling cavities completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequent low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is therefore almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented in radiation transport computer codes based on the Monte Carlo method. The aim is to assess the radiation field around the main source in support of the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general-purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis; nevertheless, each one utilizes its own nuclear cross section libraries and uses specific physics models for particle types and energies. The models implemented in the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the disadvantages and advantages of each code in this specific application.
NASA Astrophysics Data System (ADS)
Oranj, Leila Mokhtari; Lee, Hee-Seock; Leitner, Mario Santana
2017-12-01
In Korea, a heavy-ion accelerator facility (RAON) has been designed for the production of rare isotopes. The 90° bending section of this accelerator includes a 1.3-μm carbon stripper followed by two dipole magnets and other devices. The incident beam is 18.5 MeV/n 238U33+,34+ ions passing through the carbon stripper at the beginning of the section. The two dipoles are tuned to transport 238U ions with the specific charge states 77+, 78+, 79+, 80+ and 81+. Other ions are deflected at the bends and cause beam losses. These beam losses are a concern for the devices of the transport/beam line. The absorbed dose in devices and the prompt dose in the tunnel were calculated using the FLUKA code in order to estimate the radiation damage of devices located at the 90° bending section and for radiation protection. A novel method to transport a multi-charged 238U ion beam was applied in the FLUKA code, using the charge distribution of 238U ions after the stripper obtained from the LISE++ code. The calculated results showed that the absorbed dose in the devices is influenced by the geometrical arrangement. The maximum dose was observed at the coils of the first, second, fourth and fifth quadrupoles placed after the first dipole magnet. The integrated doses for 30 years of operation with 9.5 pμA of 238U ions were about 2 MGy for those quadrupoles. In conclusion, protection of the devices, particularly the quadrupoles, would be necessary to reduce damage. Moreover, the results showed that the prompt radiation penetrated within the first 60-120 cm of concrete.
ON THE PROBLEM OF PARTICLE GROUPINGS IN A TRAVELING WAVE LINEAR ACCELERATOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhileyko, G.I.
1957-01-01
A linear accelerator with "traveling" waves may be used for the production of especially short electron momenta, although in many cases the grouping capacity of the accelerator is not sufficient. Theoretically the case is derived in which grouping of the electrons takes place in the accelerator itself. (With 3 illustrations and 1 Slavic Reference). (TCO)
Advanced Accelerators for Medical Applications
NASA Astrophysics Data System (ADS)
Uesaka, Mitsuru; Koyama, Kazuyoshi
We review advanced accelerators for medical applications with respect to the following key technologies: (i) higher-RF electron linear accelerators (hereafter “linacs”); (ii) optimization of alignment for proton linacs, cyclotrons and synchrotrons; (iii) superconducting magnets; (iv) laser technology. Advanced accelerators for medical applications fall into two groups. The first group consists of compact medical linacs with high RF, and cyclotrons and synchrotrons downsized by optimization of alignment and by superconducting magnets. The second group comprises laser-based acceleration systems aimed at future medical applications. Laser-plasma electron/ion accelerating systems for cancer therapy and laser dielectric accelerating systems for radiation biology are discussed. Since the second group has important potential for compact systems, the current status of the achieved energy and intensity, and of the required stability, is given.
NASA Astrophysics Data System (ADS)
Farley, Zachary; Aslangil, Denis; Banerjee, Arindam; Lawrie, Andrew G. W.
2017-11-01
An implicit large eddy simulation (ILES) code, MOBILE, is used to explore the growth rate of the mixing-layer width of the acceleration-driven Rayleigh-Taylor instability (RTI) under variable acceleration histories. The computations consist of a series of accel-decel-accel (ADA) cases in addition to baseline constant-acceleration and accel-decel (AD) cases. The ADA cases vary the time of the second acceleration reversal (t2) and show drastic differences in growth rate. During the deceleration phase, the kinetic energy of the flow is shifted into internal wave-like patterns. These waves are evidenced by the differences in growth rate during the second acceleration phase across the set of ADA cases. Here, we investigate global parameters including the mixing width, growth rates, and the anisotropy tensor of the kinetic energy to better understand the growth behavior during the re-acceleration period. The authors acknowledge financial support from DOE-SSAA (DE-NA0003195) and NSF CAREER (#1453056) awards.
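The mixing-layer width above can be defined several ways; one common integral definition (an assumption here, not necessarily the diagnostic MOBILE uses) integrates the product of the heavy- and light-fluid volume fractions across the acceleration direction. A minimal sketch:

```python
def mixing_width(volume_fractions, dz):
    """Integral mixing width W = 6 * sum f*(1-f)*dz over the mean volume-fraction profile."""
    return 6.0 * sum(f * (1.0 - f) * dz for f in volume_fractions)

# A fully mixed slab (f = 0.5 over 10 cells of size 0.1) bounded by pure fluids:
profile = [0.0] * 5 + [0.5] * 10 + [1.0] * 5
print(mixing_width(profile, dz=0.1))  # 1.5
```

The factor of 6 normalizes W to match the visual layer thickness for a linear mean profile; pure-fluid regions (f = 0 or 1) contribute nothing.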
Multilevel acceleration of scattering-source iterations with application to electron transport
Drumm, Clif; Fan, Wesley
2017-08-18
Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has also been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and to problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The observed accelerations are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
Distant star clusters of the Milky Way in MOND
NASA Astrophysics Data System (ADS)
Haghi, H.; Baumgardt, H.; Kroupa, P.
2011-03-01
We determine the mean velocity dispersion of six Galactic outer-halo globular clusters, AM 1, Eridanus, Pal 3, Pal 4, Pal 15, and Arp 2, in the weak-acceleration regime to test classical versus modified Newtonian dynamics (MOND). Owing to the nonlinearity of MOND's Poisson equation, beyond tidal effects, the internal dynamics of clusters is affected by the external field in which they are immersed. For the studied clusters, particle accelerations are much lower than the critical acceleration a0 of MOND, but the motion of stars is dominated neither by internal accelerations (ai ≫ ae) nor by external accelerations (ae ≫ ai). We use the N-body code N-MODY in our analysis, a particle-mesh code with a numerical MOND potential solver developed by Ciotti et al. (2006, ApJ, 640, 741), to derive the line-of-sight velocity dispersion including the external field effect. We show that Newtonian dynamics predicts a low velocity dispersion for each cluster, while in modified Newtonian dynamics the velocity dispersion is much higher. We calculate the minimum number of measured stars necessary to distinguish between Newtonian gravity and MOND with the Kolmogorov-Smirnov test, and show that for most clusters it is necessary to measure the velocities of between 30 and 80 stars to distinguish between the two cases. The observational measurement of the line-of-sight velocity dispersion of these clusters will therefore provide a test of MOND.
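The external-field effect can be sketched in its simplest limit: when the external acceleration dominates, MOND dynamics becomes quasi-Newtonian with gravity boosted by 1/μ(ae/a0), so the velocity dispersion rises roughly as μ to the −1/2. The "simple" interpolating function below is an assumption for illustration; N-MODY solves the full nonlinear Poisson equation instead:

```python
A0 = 1.2e-10  # MOND critical acceleration in m/s^2 (commonly quoted value)

def mond_dispersion_boost(a_external, a0=A0):
    """Rough dispersion boost over Newtonian in the external-field-dominated limit."""
    x = a_external / a0
    mu = x / (1.0 + x)        # "simple" interpolating function (an assumption)
    return mu ** -0.5         # gravity ~ g_N / mu, dispersion ~ sqrt(gravity)

# Deeper in the weak-field regime, the predicted excess over Newtonian grows:
print(mond_dispersion_boost(1.2e-11) > mond_dispersion_boost(1.2e-8) > 1.0)  # True
```

This is only a scaling argument; the intermediate regime ai ~ ae studied in the paper requires the numerical solver.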
Vibration acceleration promotes bone formation in rodent models
Uchida, Ryohei; Nakata, Ken; Kawano, Fuminori; Yonetani, Yasukazu; Ogasawara, Issei; Nakai, Naoya; Mae, Tatsuo; Matsuo, Tomohiko; Tachibana, Yuta; Yokoi, Hiroyuki; Yoshikawa, Hideki
2017-01-01
All living tissues and cells on Earth are subject to gravitational acceleration, but no reports have verified whether the acceleration mode influences bone formation and healing. This study therefore compared the effects of two acceleration modes, vibration and constant (centrifugal) acceleration, on bone formation and healing in the trunk, using a BMP-2-induced ectopic bone formation (EBF) mouse model and a rib fracture healing (RFH) rat model. Additionally, we tried to verify whether the mechanism by which acceleration affects bone formation differs between these two models. Three groups (low- and high-magnitude vibration and control-VA) were evaluated in the vibration acceleration study, and two groups (centrifuge acceleration and control-CA) were used in the constant acceleration study. In each model, the intervention was applied for ten minutes per day from three days after surgery, for eleven days (EBF model) or nine days (RFH model). All animals were sacrificed the day after the intervention ended. In the EBF model, ectopic bone was evaluated by macroscopic and histological observation, wet weight, radiography and microfocus computed tomography (micro-CT). In the RFH model, whole fracture-repaired ribs were excised, soft tissue was removed, and the ribs were evaluated radiologically and histologically. Ectopic bones in the low-magnitude group (EBF model) had significantly greater wet weight and were significantly larger (macroscopically and radiographically) than those in the other two groups, whereas the size and wet weight of ectopic bones in the centrifuge acceleration group showed no significant difference compared with those in the control-CA group. All ectopic bones showed calcified trabeculae and mature bone marrow. Micro-CT showed that bone volume (BV) in the low-magnitude group of the EBF model was significantly higher than in the other two groups (3.1±1.2 mm3 vs. 1.8±1.2 mm3 in the high-magnitude group and 1.3±0.9 mm3 in the control-VA group), but BV in the centrifuge acceleration group showed no significant difference compared with the control-CA group. The union rate and BV in the low-magnitude group of the RFH model were also significantly higher than in the other groups (union rate: 60% vs. 0% in the high-magnitude group and 10% in the control-VA group; BV: 0.69±0.30 mm3 vs. 0.15±0.09 mm3 in the high-magnitude group and 0.22±0.17 mm3 in the control-VA group). BV/TV in the low-magnitude group of the RFH model was significantly higher than in the control-VA group (59.4±14.9% vs. 35.8±13.5%). On the other hand, the radiographic union rate (10% in the centrifuge acceleration group vs. 20% in the control-CA group) and micro-CT parameters in the RFH model were not significantly different between the two groups in the constant acceleration study. Radiographic images of non-union rib fractures showed cartilage at the fracture site and poor new bone formation, whereas union samples showed only new bone. In conclusion, low-magnitude vibration acceleration promoted bone formation in the trunk in both the BMP-induced ectopic bone formation and rib fracture healing models. However, the micro-CT parameters differed between the two models, suggesting that the mechanism of the vibration effect may differ between them. PMID:28264058
NASA Astrophysics Data System (ADS)
Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo
2016-10-01
Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are computationally very expensive due to the disparity of the relevant scales, ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten-centimeter range. To bridge the gap between these disparate scales, the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser-pulse envelope separately, only scales larger than the plasma wavelength need to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of a 3D PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed- and shared-memory parallelization, and discuss the stability of the PGC solver.
A Population Synthesis Study of Terrestrial Gamma-ray Flashes
NASA Astrophysics Data System (ADS)
Cramer, E. S.; Briggs, M. S.; Stanbro, M.; Dwyer, J. R.; Mailyan, B. G.; Roberts, O.
2017-12-01
In astrophysics, population synthesis models are tools used to determine what mix of stars could be consistent with observations, e.g., how the intrinsic mass-to-light ratio is altered by the measurement process. A similar technique can be used to understand the production of terrestrial gamma-ray flashes (TGFs). The models used for this type of population study probe the conditions of electron acceleration inside the high-electric-field regions of thunderstorms, i.e., the acceleration length, electric field strength, and beaming angles. In this work, we use a Monte Carlo code to generate bremsstrahlung photons from relativistic electrons that are accelerated by a large-scale RREA thunderstorm electric field. The code simulates the propagation of photons through the atmosphere from various source altitudes, where they interact with air via Compton scattering, pair production, and photoelectric absorption. We then show the differences in the hardness ratio at spacecraft altitude between these simulations and compare them with TGF data from Fermi-GBM. Such comparisons can lead to constraints on popular TGF beaming models, and help determine whether the population presented in this study is consistent with reality.
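The hardness ratio compared against Fermi-GBM data is, in essence, counts in a high-energy band divided by counts in a low-energy band. A hedged sketch (the 1 MeV band edge is an illustrative assumption, not GBM's actual channel boundaries):

```python
def hardness_ratio(photon_energies_mev, split_mev=1.0):
    """Counts above the band edge divided by counts below it."""
    hi = sum(1 for e in photon_energies_mev if e >= split_mev)
    lo = sum(1 for e in photon_energies_mev if e < split_mev)
    return hi / lo if lo else float("inf")

soft = [0.2, 0.4, 0.6, 0.8, 1.5]   # Compton-degraded spectrum (toy numbers)
hard = [0.5, 1.2, 2.0, 4.0, 8.0]   # harder source spectrum (toy numbers)
print(hardness_ratio(soft) < hardness_ratio(hard))  # True
```

More atmospheric Compton scattering softens the escaping spectrum, lowering the ratio, which is what makes it a useful diagnostic of source altitude and beaming.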
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
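At its core, an application simulator like TADSim rests on an ordinary discrete-event loop: timestamped events are popped from a priority queue and advance a virtual clock, so algorithm stages can be "executed" in microseconds instead of hours. A minimal sketch with invented stage names and durations (not TADSim's actual model):

```python
import heapq

def run_des(events):
    """events: iterable of (start_time, duration, name); returns a completion log."""
    queue = list(events)
    heapq.heapify(queue)                      # orders events by start time
    clock, log = 0.0, []
    while queue:
        start, duration, name = heapq.heappop(queue)
        clock = max(clock, start) + duration  # stages run serially in time order
        log.append((name, clock))
    return log

# Invented TAD-like stages: (start, duration, name)
stages = [(0.0, 2.0, "md-block"), (1.0, 0.5, "saddle-search"), (3.0, 1.0, "dephasing")]
print(run_des(stages))
```

Parameterizing the durations (rather than running real MD) is what lets such a proxy scan far more algorithm configurations than the full code could.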
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA, the Fermi GPU. In biomedical optics, the MC method is the gold-standard approach for simulating light transport in biological tissue, both for its accuracy and for its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
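The per-photon kernel that GPU-MCML parallelizes can be caricatured in a few lines: each interaction deposits a fraction of the photon's statistical weight into an absorption tally, and on the GPU that tally update is precisely the atomic-add hotspot the paper moves into shared memory. A zero-dimensional sketch (optical coefficients are illustrative, not MCML defaults):

```python
def absorb_weight(mu_a, mu_s, w_min=1e-4):
    """Deposit a photon's statistical weight step by step until it is negligible."""
    albedo = mu_s / (mu_a + mu_s)       # survival fraction per interaction
    w, absorbed = 1.0, 0.0
    while w > w_min:
        absorbed += w * (1.0 - albedo)  # this tally add is the GPU atomic hotspot
        w *= albedo
    return absorbed

# With absorption much weaker than scattering, nearly all launched weight is
# absorbed eventually, but over many tiny deposits -- hence the contention.
print(absorb_weight(mu_a=0.1, mu_s=10.0) > 0.999)  # True
```

Because many threads hit the same voxel tally, accumulating deposits privately in shared memory and flushing once per block trades thousands of global atomics for one.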
Optimization of a ΔE - E detector for 41Ca AMS
NASA Astrophysics Data System (ADS)
Hosoya, Seiji; Sasa, Kimikazu; Matsunaka, Tetsuya; Takahashi, Tsutomu; Matsumura, Masumi; Matsumura, Hiroshi; Sundquist, Mark; Stodola, Mark; Sueki, Keisuke
2017-09-01
A series of nuclides (14C, 26Al, and 36Cl) was measured using the 12UD Pelletron tandem accelerator before its replacement by the horizontal 6 MV tandem accelerator at the University of Tsukuba Tandem Accelerator Complex (UTTAC). This paper considers the modification of the accelerator mass spectrometry (AMS) measurement parameters to suit the current 6 MV tandem accelerator setup (e.g., terminal voltage, detected ion charge state, gas pressure, and detector entrance-window material). The Particle and Heavy Ion Transport code System (PHITS) was also used to simulate the AMS measurement and determine the best conditions for suppressing isobaric interference. The spectra of 41Ca and 41K were then successfully separated and identified; the system achieved a background level of 41Ca/40Ca ∼ 6 × 10^-14.
Advances/applications of MAGIC and SOS
NASA Astrophysics Data System (ADS)
Warren, Gary; Ludeking, Larry; Nguyen, Khanh; Smithe, David; Goplen, Bruce
1993-12-01
MAGIC and SOS have been applied to investigate a variety of accelerator-related devices. Examples include high-brightness electron guns, beam-RF interactions in klystrons, cold-test modes in an RFQ and in RF sources, and a high-quality, flexible electron gun with operating modes appropriate for gyrotrons, peniotrons, and other RF sources. Algorithmic improvements for PIC have been developed and added to MAGIC and SOS to facilitate these modeling efforts. Two new field algorithms allow improved control of computational numerical noise and selective control of harmonic modes in RF cavities. An axial filter in SOS accelerates simulations in cylindrical coordinates. The recent addition of an export/import feature now allows long devices to be modeled in sections. Interfaces have been added to receive electromagnetic field information from the Poisson group of codes and from EGUN, and to send beam information to PARMELA for subsequent tracing of bunches through beam optics. Post-processors compute and display beam properties including geometric, normalized, and slice emittances, phase-space parameters, and video output. VMS, UNIX, and DOS versions are supported, with migration underway toward Windows environments.
Candidate molten salt investigation for an accelerator driven subcritical core
NASA Astrophysics Data System (ADS)
Sooby, E.; Baty, A.; Beneš, O.; McIntyre, P.; Pogue, N.; Salanne, M.; Sattarov, A.
2013-09-01
We report a design for accelerator-driven subcritical fission in a molten salt core (ADSMS) that utilizes a fuel salt composed of NaCl and transuranic (TRU) chlorides. The ADSMS core is designed for fast neutronics (28% of neutrons >1 MeV) to optimize TRU destruction. The choice of a NaCl-based salt offers benefits for corrosion, operating temperature, and actinide solubility as compared with LiF-based fuel salts. A molecular dynamics (MD) code has been used to estimate properties of the molten salt system which are important for ADSMS design but have never been measured experimentally. Results from the MD studies are reported. Experimental measurements of fuel salt properties and studies of corrosion and radiation damage on candidate metals for the core vessel are anticipated. A special thanks is due to Prof. Paul Madden for introducing the ADSMS group to the concept of using the molten salt as the spallation target, rather than a conventional heavy metal spallation target. This feature helps to optimize this core as a Pu/TRU burner.
Cınar, Yasin; Cingü, Abdullah Kürşat; Türkcü, Fatih Mehmet; Çınar, Tuba; Yüksel, Harun; Özkurt, Zeynep Gürsel; Çaça, Ihsan
2014-09-01
To compare outcomes of accelerated and conventional corneal cross-linking (CXL) for progressive keratoconus (KC), patients were divided into two groups, the accelerated CXL group and the conventional CXL group. Uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), refraction and keratometric values were measured preoperatively and postoperatively, and the data of the two groups were compared statistically. The mean UDVA and CDVA at six months postoperatively were better than the preoperative values in both groups. While the change in UDVA and CDVA was statistically significant in the accelerated CXL group (p = 0.035 and p = 0.047, respectively), it did not reach statistical significance in the conventional CXL group (p = 0.184 and p = 0.113, respectively). The decreases in mean corneal power (Km) and maximum keratometric value (Kmax) were statistically significant in both groups (p = 0.012 and 0.046, respectively, in the accelerated CXL group; p = 0.012 and 0.041, respectively, in the conventional CXL group). There was no statistically significant difference in visual and refractive results between the two groups (p > 0.05). Over this short follow-up period, the refractive and visual results of the accelerated and conventional CXL methods for the treatment of KC were similar. The accelerated CXL method is faster and provides higher patient throughput.
Lee, Patrick; Maynard, G.; Audet, T. L.; ...
2016-11-16
The dynamics of electron acceleration driven by laser wakefield is studied in detail using the particle-in-cell code WARP, with the objective of generating high-quality electron bunches with narrow energy spread and small emittance, relevant for the electron injector of a multistage accelerator. Simulation results, using experimentally achievable parameters, show that electron bunches with an energy spread of ~11% can be obtained by using an ionization-induced injection mechanism in a mm-scale-length plasma. By controlling the focusing of a moderate laser power and tailoring the longitudinal plasma density profile, the positions at which electron injection begins and ends can be adjusted, while the electron energy can be finely tuned in the last acceleration section.
NASA Astrophysics Data System (ADS)
Du, S.; Guo, F.; Zank, G. P.; Li, X.; Stanier, A.
2017-12-01
The interaction between magnetic flux ropes has been suggested as a process that leads to efficient plasma energization and particle acceleration (e.g., Drake et al. 2013; Zank et al. 2014). However, the underlying plasma dynamics and acceleration mechanisms need to be examined with numerical simulations. As a first step in this effort, we carry out 2D fully kinetic simulations using the VPIC code to study plasma energization and particle acceleration during the coalescence of two magnetic flux ropes. Our analysis shows that the reconnection electric field and the compression effect are important in plasma energization. The results may help in understanding the energization process associated with the magnetic flux ropes frequently observed in the solar wind near the heliospheric current sheet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yuhe; Mazur, Thomas R.; Green, Olga
Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria.
Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next-generation MR-IGRT systems.
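The 2%/2 mm gamma criterion used above combines a dose-difference test and a distance-to-agreement test into a single index; a reference point passes if some evaluated point keeps the combined index at or below 1. A hedged 1-D sketch (real gamma analysis is 3-D, and normalization conventions vary):

```python
def gamma_pass_rate(ref, ev, spacing_mm, dd=0.02, dta_mm=2.0):
    """Fraction of reference points with a 1-D gamma index <= 1 (2%/2 mm default)."""
    norm = max(ref)  # global normalization dose (one common convention)
    passed = 0
    for i, dr in enumerate(ref):
        best = min(
            (((de - dr) / (dd * norm)) ** 2          # dose-difference term
             + ((j - i) * spacing_mm / dta_mm) ** 2  # distance-to-agreement term
             ) ** 0.5
            for j, de in enumerate(ev)
        )
        passed += best <= 1.0
    return passed / len(ref)

ref = [0.0, 0.5, 1.0, 0.5, 0.0]    # toy dose profiles on a 1 mm grid
ev  = [0.0, 0.51, 1.0, 0.49, 0.0]  # within 2% everywhere -> all points pass
print(gamma_pass_rate(ref, ev, spacing_mm=1.0))  # 1.0
```

A passing rate like the 99.1% reported above means almost every reference voxel found an evaluated neighbor that is close in both dose and position.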
Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold
2016-01-01
Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on penelope and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: penelope was first translated from fortran to c++ and the result was confirmed to produce equivalent results to the original code. The c++ code was then adapted to cuda in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gpenelope highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gpenelope as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gpenelope. Ultimately, gpenelope was applied toward independent validation of patient doses calculated by MRIdian’s kmc. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread fortran implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen(1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gpenelope with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU- accelerated version of penelope. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. 
Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems. PMID:27370123
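The Woodcock (delta) tracking mentioned among the gpenelope extensions lets particles cross a voxelized geometry without stopping at every voxel boundary: free paths are sampled against a single majorant attenuation coefficient, and candidate collisions are accepted or rejected according to the local coefficient. A minimal 1-D Python sketch of the idea (function name and the toy attenuation grid are illustrative, not taken from gpenelope):

```python
import math
import random

def woodcock_track(mu, voxel_size, x0, rng):
    """Distance to a real interaction in a voxelized 1-D medium via
    Woodcock (delta) tracking: exponential free paths are drawn against
    the majorant coefficient mu_max, and candidate collisions in voxels
    with a lower coefficient are rejected as virtual ('delta') events."""
    mu_max = max(mu)                      # majorant attenuation coefficient
    x = x0
    while True:
        # Exponential free path in the homogeneous majorant medium;
        # 1 - random() lies in (0, 1], so the log is always defined.
        x += -math.log(1.0 - rng.random()) / mu_max
        ivox = int(x // voxel_size)
        if ivox >= len(mu):
            return None                   # particle escaped the grid
        # Accept a real collision with probability mu(x)/mu_max.
        if rng.random() < mu[ivox] / mu_max:
            return x
```

In a homogeneous medium every candidate is accepted, so the sampled paths reduce to ordinary exponential free paths with mean 1/mu, which makes the scheme easy to sanity-check.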
Structural Affects on the Slamming Pressures of High-Speed Planing Craft
NASA Astrophysics Data System (ADS)
Ikeda, Christine; Taravella, Brandon; Judge, Carolyn
2015-11-01
High-speed planing craft are subjected to repeated slamming events in waves that can be very extreme depending on the wave topography, impact angle of the ship, forward speed of the ship, encounter angle, and height out of the water. The current work examines this fluid-structure interaction problem through the use of wedge drop experiments and a CFD code. In the first set of experiments, a rigid 20-degree deadrise angle wedge was dropped from a range of heights (0 ≤ H ≤ 0.6 m) while pressures and accelerations of the slam event were measured. The second set of experiments involved a flexible-bottom 15-degree deadrise angle wedge that was dropped from the same range of heights. In these second experiments, the pressures, accelerations, and strain field were measured. Both experiments are compared with a non-linear boundary value flat cylinder theory code in order to compare the pressure loading. The code assumes a rigid structure; therefore, the results between the code and the first experiment are in good agreement. The second experiment shows pressure magnitudes that are lower than the predictions due to the energy required to deform the structure. Funding from University of New Orleans Office of Research and Sponsored Programs and the Office of Naval Research.
Seismic design parameters - A user guide
Leyendecker, E.V.; Frankel, A.D.; Rukstales, K.S.
2001-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings (1997 NEHRP Provisions) introduced a seismic design procedure that is based on the explicit use of spectral response acceleration rather than the traditional peak ground acceleration and/or peak ground velocity or zone factors. The spectral response accelerations are obtained from spectral response acceleration maps accompanying the report. Maps are available for the United States and a number of U.S. territories. Since 1997, additional codes and standards have adopted seismic design approaches based on the same procedure used in the NEHRP Provisions and the accompanying maps. The design documents using the 1997 NEHRP Provisions procedure may be divided into three categories: (1) Design of New Construction, (2) Design and Evaluation of Existing Construction, and (3) Design of Residential Construction. A CD-ROM has been prepared for use in conjunction with the design documents in each of these three categories. The spectral accelerations obtained using the software on the CD are the same as those that would be obtained by using the maps accompanying the design documents. The software has been prepared to operate on a personal computer using a Windows (Microsoft Corporation) operating environment and a point-and-click interface. The user can obtain the spectral acceleration values that would be obtained by use of the maps accompanying the design documents, include site factors appropriate for the Site Class provided by the user, calculate a response spectrum that includes the site factor, and plot a response spectrum. Sites may be located by providing the latitude-longitude or zip code for all areas covered by the maps. All of the maps used in the various documents are also included on the CD-ROM.
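The site-factored response spectrum the CD-ROM software computes follows the 1997 NEHRP general procedure: map the mapped accelerations Ss and S1 through the site factors Fa and Fv, scale by 2/3, and build a three-branch spectrum. A hedged Python sketch of that procedure (the numerical inputs below are illustrative; actual Fa/Fv values come from the Provisions' tables for the user's Site Class):

```python
def design_spectrum(ss, s1, fa, fv, periods):
    """Three-branch general design response spectrum of the 1997 NEHRP
    procedure: rising ramp below T0, flat plateau S_DS up to Ts, and a
    1/T decay above Ts."""
    sds = 2.0 / 3.0 * fa * ss          # short-period design acceleration
    sd1 = 2.0 / 3.0 * fv * s1          # 1-second design acceleration
    ts = sd1 / sds                     # corner period between plateau and decay
    t0 = 0.2 * ts                      # end of the initial ramp
    spectrum = []
    for t in periods:
        if t < t0:
            sa = sds * (0.4 + 0.6 * t / t0)
        elif t <= ts:
            sa = sds
        else:
            sa = sd1 / t
        spectrum.append(sa)
    return spectrum
```

For example, Ss = 1.5 g, S1 = 0.6 g with Fa = 1.0 and Fv = 1.5 gives a plateau of S_DS = 1.0 g between T0 = 0.12 s and Ts = 0.6 s.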
Optical control system for high-voltage terminals
Bicek, John J.
1978-01-01
An optical control system for the control of devices in the terminal of an electrostatic accelerator includes a laser that is modulated by a series of preselected codes produced by an encoder. A photodiode receiver is placed in the laser beam at the high-voltage terminal of an electrostatic accelerator. A decoder connected to the photodiode decodes the signals to provide control impulses for a plurality of devices at the high voltage of the terminal.
Numerical study of shock-wave/boundary layer interactions in premixed hydrogen-air hypersonic flows
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1991-01-01
A computational study of shock wave/boundary layer interactions involving premixed combustible gases, and the resulting combustion processes is presented. The analysis is carried out using a new fully implicit, total variation diminishing (TVD) code developed for solving the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. To accelerate the convergence of the basic iterative procedure, this code is combined with vector extrapolation methods. The chemical nonequilibrium processes are simulated by means of a finite-rate chemistry model for hydrogen-air combustion. Several validation test cases are presented and the results compared with experimental data or with other computational results. The code is then applied to study shock wave/boundary layer interactions in a ram accelerator configuration. Results indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outwards and downstream. At higher Mach numbers, spontaneous ignition in part of the boundary layer is observed, which eventually extends along the entire boundary layer at still higher values of the Mach number.
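Vector extrapolation methods of the kind used here to accelerate the basic iterative procedure can be illustrated with a component-wise Aitken delta-squared step; this is a generic sketch of the family, not the specific extrapolation scheme of the paper:

```python
def aitken_accelerate(g, x0, sweeps=5):
    """Accelerate the fixed-point iteration x_{k+1} = g(x_k) with a
    component-wise Aitken delta-squared step: from three successive
    iterates x, g(x), g(g(x)), extrapolate toward the fixed point."""
    x = list(x0)
    for _ in range(sweeps):
        x1 = g(x)
        x2 = g(x1)
        new = []
        for a, b, c in zip(x, x1, x2):
            d1, d2 = b - a, c - b
            denom = d2 - d1
            # Extrapolated value; fall back to the plain iterate when
            # the denominator is numerically zero (already converged).
            new.append(a - d1 * d1 / denom if abs(denom) > 1e-14 else c)
        x = new
    return x
```

For a linear contraction the extrapolation is exact in a single sweep, which is why such schemes can sharply reduce the iteration count of a slowly converging solver.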
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; ...
2018-01-12
We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software designed to perform classical molecular dynamics simulations using GPU accelerations. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test simulations of calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with another popular software, LAMMPS, running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. In conclusion, the source code can be accessed through the HOOMD-blue web page for free by any interested user.
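The EAM energy evaluated by such a code has the standard form E = Σᵢ F(ρᵢ) + ½ Σᵢ≠ⱼ φ(rᵢⱼ), with the host electron density ρᵢ = Σⱼ≠ᵢ f(rᵢⱼ). A minimal CPU-side O(N²) Python sketch of that functional form (the callables stand in for the tabulated potential functions a real EAM file provides; nothing here is HOOMD-blue API):

```python
import math

def eam_energy(positions, pair, density, embed, cutoff):
    """Total embedded-atom-method energy:
    E = sum_i F(rho_i) + 1/2 sum_{i != j} phi(r_ij),
    where rho_i = sum_{j != i} f(r_ij) is the host electron density
    at atom i. pair, density, and embed are user-supplied callables."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        rho = 0.0
        for j in range(n):
            if i == j:
                continue
            r = math.dist(positions[i], positions[j])
            if r < cutoff:
                rho += density(r)
                energy += 0.5 * pair(r)   # half avoids double counting
        energy += embed(rho)
    return energy
```

A production GPU implementation replaces the double loop with neighbor lists and one thread (or thread block) per atom, but the energy decomposition is the same.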
Beam dynamics simulation of HEBT for the SSC-linac injector
NASA Astrophysics Data System (ADS)
Li, Xiao-Ni; Yuan, You-Jin; Xiao, Chen; He, Yuan; Wang, Zhi-Jun; Sheng, Li-Na
2012-11-01
The SSC-linac (a new injector for the Separated Sector Cyclotron) is being designed in the HIRFL (Heavy Ion Research Facility in Lanzhou) system to accelerate 238U34+ from 3.72 keV/u to 1.008 MeV/u. As a part of the SSC-linac injector, the HEBT (high energy beam transport) has been designed by using the TRACE-3D code and simulated by the 3D PIC (particle-in-cell) Track code. The total length of the HEBT is about 12 meters, and a beam line of about 6 meters is shared with the existing beam line of the HIRFL system. The simulation results show that the particles can be delivered efficiently in the HEBT and that the particles at the exit of the HEBT match the acceptance of the SSC well for further acceleration. The dispersion is completely eliminated in the HEBT. The space-charge effect calculated by the Track code is inconspicuous. According to the simulation, more than 60 percent of the particles from the ion source can be transported into the acceptance of the SSC.
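First-order beam-line design of the kind done with TRACE-3D composes transfer matrices element by element, one 2×2 matrix per transverse plane. A minimal sketch with only a field-free drift and a thin-lens quadrupole (illustrative; this is not the SSC-linac HEBT lattice):

```python
def drift(length):
    """2x2 transfer matrix of a field-free drift in one transverse plane."""
    return [[1.0, length], [0.0, 1.0]]

def thin_quad(f):
    """2x2 transfer matrix of a thin quadrupole with focal length f
    (focusing in this plane for f > 0)."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(a, b):
    """Compose two 2x2 matrices: a @ b applies b first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def track(elements, state):
    """Propagate a (x, x') phase-space vector through a list of
    element matrices, first element first."""
    x, xp = state
    for m in elements:
        x, xp = m[0][0] * x + m[0][1] * xp, m[1][0] * x + m[1][1] * xp
    return x, xp
```

A quick check: a ray entering parallel at x = 1 through a thin quad of focal length 2 followed by a drift of length 2 crosses the axis exactly at the focal point.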
Shielding from space radiations
NASA Technical Reports Server (NTRS)
Chang, C. Ken; Badavi, Forooz F.; Tripathi, Ram K.
1993-01-01
This Progress Report, covering the period of December 1, 1992 to June 1, 1993, presents the development of an analytical solution to the heavy ion transport equation in terms of Green's function formalism. The results of the mathematical development are recast into a highly efficient computer code for space applications. The efficiency of this algorithm is accomplished by a nonperturbative technique of extending the Green's function over the solution domain. The code may also be applied to accelerator boundary conditions to allow code validation in laboratory experiments. Results from the isotopic version of the code, with 59 isotopes present, for a single-layer target material are presented for the case of an iron beam projectile at 600 MeV/nucleon in water. A listing of the single-layer isotopic version of the code is included.
NASA Technical Reports Server (NTRS)
Mclallin, K. L.; Kofskey, M. G.; Civinskas, K. C.
1983-01-01
The performance of a variable-area stator, axial flow power turbine was determined in a cold-air component research rig for two inlet duct configurations. The two ducts were an interstage diffuser duct and an accelerated-flow inlet duct which produced stator inlet boundary layer flow blockages of 11 percent and 3 percent, respectively. Turbine blade total efficiency at design point was measured to be 5.3 percent greater with the accelerated-flow inlet duct installed due to the reduction in inlet blockage. Blade component measurements show that of this performance improvement, 35 percent occurred in the stator and 65 percent occurred in the rotor. Analyses of the inlet duct internal flow using an Axisymmetric Diffuser Duct Code (ADD Code) were in substantial agreement with the test data.
Accelerating the Pace of Protein Functional Annotation With Intel Xeon Phi Coprocessors.
Feinstein, Wei P; Moreno, Juana; Jarrell, Mark; Brylinski, Michal
2015-06-01
Intel Xeon Phi is a new addition to the family of powerful parallel accelerators. The range of its potential applications in computationally driven research is broad; however, at present, the repository of scientific codes is still relatively limited. In this study, we describe the development and benchmarking of a parallel version of eFindSite, a structural bioinformatics algorithm for the prediction of ligand-binding sites in proteins. Implemented for the Intel Xeon Phi platform, the parallelization of the structure alignment portion of eFindSite using pragma-based OpenMP brings about the desired performance improvements, which scale well with the number of computing cores. Compared to a serial version, the parallel code runs 11.8 and 10.1 times faster on the CPU and the coprocessor, respectively; when both resources are utilized simultaneously, the speedup is 17.6. For example, ligand-binding predictions for 501 benchmarking proteins are completed in 2.1 hours on a single Stampede node equipped with the Intel Xeon Phi card compared to 3.1 hours without the accelerator and 36.8 hours required by a serial version. In addition to the satisfactory parallel performance, porting existing scientific codes to the Intel Xeon Phi architecture is relatively straightforward with a short development time due to the support of common parallel programming models by the coprocessor. The parallel version of eFindSite is freely available to the academic community at www.brylinski.org/efindsite.
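The parallelization pattern described, a loop over independent template alignments, maps onto any data-parallel runtime. A minimal Python sketch of the same structure, using a thread pool where the paper uses a pragma-based OpenMP `parallel for` (the `align` callable is a placeholder for the eFindSite structure-alignment kernel, not its real API):

```python
from concurrent.futures import ThreadPoolExecutor

def align_all(query, templates, align, workers=4):
    """Data-parallel skeleton of the eFindSite-style loop: each
    query-vs-template alignment is independent, so the loop
    parallelizes cleanly across workers. Results are returned in
    the same order as the input templates."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: align(query, t), templates))
```

Because the iterations share no state, the observed speedup is limited mainly by scheduling overhead and load imbalance between templates, which is why the paper's scaling with core count is close to linear.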
NASA Technical Reports Server (NTRS)
Hague, D. S.; Rozendaal, H. L.
1977-01-01
A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talman, Richard M.; Talman, John D.
Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring “trap.” At the “magic” kinetic energy of 232.792 MeV, proton spins are “frozen,” i.e., always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach, which is approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial “symplectification”).
Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Unified Accelerator Libraries (ual) environment to “resurrect,” or reverse engineer, the “AGS-analog” all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDMs. As a result, the companion paper also describes preliminary lattice studies for the planned proton EDM storage rings as well as testing the code for long time orbit and spin tracking.
Bergueiro, J; Igarzabal, M; Sandin, J C Suarez; Somacal, H R; Vento, V Thatar; Huck, H; Valda, A A; Repetto, M; Kreiner, A J
2011-12-01
Several ion sources have been developed and an ion source test stand has been mounted for the first stage of a Tandem-Electrostatic-Quadrupole facility for Accelerator-Based Boron Neutron Capture Therapy. The first source to be designed, fabricated, and tested is a dual-chamber, filament-driven, magnetically compressed volume plasma proton ion source. A 4 mA beam has been accelerated and transported into the suppressed Faraday cup. Extensive simulations of the sources have been performed using both 2D and 3D self-consistent codes.
Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
Designing a Dielectric Laser Accelerator on a Chip
NASA Astrophysics Data System (ADS)
Niedermayer, Uwe; Boine-Frankenheim, Oliver; Egenolf, Thilo
2017-07-01
Dielectric Laser Acceleration (DLA) achieves gradients of more than 1 GeV/m, which are among the highest in non-plasma accelerators. The long-term goal of the ACHIP collaboration is to provide relativistic (>1 MeV) electrons by means of a laser-driven microchip accelerator. Examples of “slightly resonant” dielectric structures showing gradients in the range of 70% of the incident laser field (1 GV/m) for electrons with beta=0.32 and 200% for beta=0.91 are presented. We demonstrate the bunching and acceleration of low-energy electrons in dedicated ballistic buncher and velocity-matched grating structures. However, the design gradient of 500 MeV/m leads to rapid defocusing. Therefore we present a scheme to bunch the beam in stages, which not only reduces the energy spread but also the transverse defocusing. The designs are made with a dedicated homemade 6D particle tracking code.
EDITORIAL: Laser and plasma accelerators Laser and plasma accelerators
NASA Astrophysics Data System (ADS)
Bingham, Robert
2009-02-01
This special issue on laser and plasma accelerators illustrates the rapid advancement and diverse applications of laser and plasma accelerators. Plasma is an attractive medium for particle acceleration because of the high electric field it can sustain, with studies of acceleration processes remaining one of the most important areas of research in both laboratory and astrophysical plasmas. The rapid advance in laser and accelerator technology has led to the development of terawatt and petawatt laser systems with ultra-high intensities and short sub-picosecond pulses, which are used to generate wakefields in plasma. Recent successes include the demonstration by several groups in 2004 of quasi-monoenergetic electron beams by wakefields in the bubble regime with the GeV energy barrier being reached in 2006, and the energy doubling of the SLAC high-energy electron beam from 42 to 85 GeV. The electron beams generated by the laser plasma driven wakefields have good spatial quality with energies ranging from MeV to GeV. A unique feature is that they are ultra-short bunches with simulations showing that they can be as short as a few femtoseconds with low-energy spread, making these beams ideal for a variety of applications ranging from novel high-brightness radiation sources for medicine, material science and ultrafast time-resolved radiobiology or chemistry. Laser driven ion acceleration experiments have also made significant advances over the last few years with applications in laser fusion, nuclear physics and medicine. Attention is focused on the possibility of producing quasi-mono-energetic ions with energies ranging from hundreds of MeV to GeV per nucleon. New acceleration mechanisms are being studied, including ion acceleration from ultra-thin foils and direct laser acceleration. 
The application of wakefields or beat waves in other areas of science such as astrophysics and particle physics is beginning to take off, such as the study of cosmic accelerators considered by Chen et al where the driver, instead of being a laser, is a whistler wave known as the magnetowave plasma accelerator. The application to electron--positron plasmas that are found around pulsars is studied in the paper by Shukla, and to muon acceleration by Peano et al. Electron wakefield experiments are now concentrating on control and optimisation of high-quality beams that can be used as drivers for novel radiation sources. Studies by Thomas et al show that filamentation has a deleterious effect on the production of high quality mono-energetic electron beams and is caused by non-optimal choice of focusing geometry and/or electron density. It is crucial to match the focusing with the right plasma parameters and new types of plasma channels are being developed, such as the magnetically controlled plasma waveguide reported by Froula et al. The magnetic field provides a pressure profile shaping the channel to match the guiding conditions of the incident laser, resulting in predicted electron energies of 3GeV. In the forced laser-wakefield experiment Fang et al show that pump depletion reduces or inhibits the acceleration of electrons. One of the earlier laser acceleration concepts known as the beat wave may be revived due to the work by Kalmykov et al who report on all-optical control of nonlinear focusing of laser beams, allowing for stable propagation over several Rayleigh lengths with pre-injected electrons accelerated beyond 100 MeV. With the increasing number of petawatt lasers, attention is being focused on different acceleration regimes such as stochastic acceleration by counterpropagating laser pulses, the relativistic mirror, or the snow-plough effect leading to single-step acceleration reported by Mendonca. 
During wakefield acceleration the leading edge of the pulse undergoes frequency downshifting and head erosion as the laser energy is transferred to the wake while the trailing edge of the laser pulse undergoes frequency up-shift. This is commonly known as photon deceleration and acceleration and is the result of a modulational instability. Simulations reported by Trines et al using a photon-in-cell code or wave kinetic code agree extremely well with experimental observation. Ion acceleration is actively studied; for example the papers by Robinson, Macchi, Marita and Tripathi all discuss different types of acceleration mechanisms from direct laser acceleration, Coulombic explosion and double layers. Ion acceleration is an exciting development that may have great promise in oncology. The surprising application is in muon acceleration, demonstrated by Peano et al who show that counterpropagating laser beams with variable frequencies drive a beat structure with variable phase velocity, leading to particle trapping and acceleration with possible application to a future muon collider and neutrino factory. Laser and plasma accelerators remain one of the exciting areas of plasma physics with applications in many areas of science ranging from laser fusion, novel high-brightness radiation sources, particle physics and medicine. The guest editor would like to thank all authors and referees for their invaluable contributions to this special issue.
Laser-driven dielectric electron accelerator for radiobiology researches
NASA Astrophysics Data System (ADS)
Koyama, Kazuyoshi; Matsumura, Yosuke; Uesaka, Mitsuru; Yoshida, Mitsuhiro; Natsui, Takuya; Aimierding, Aimidula
2013-05-01
In order to estimate the health risk associated with a low dose of radiation, the fundamental process of the radiation effects in a living cell must be understood. It is desired that an electron bunch or photon pulse precisely strike a cell nucleus and DNA. The required electron energy and electronic charge of the bunch are several tens of keV to 1 MeV and 0.1 fC to 1 fC, respectively. A beam size smaller than a micron is preferable for precise observation. Since the laser-driven dielectric electron accelerator is well suited to a compact micro-beam source, a phase-modulation-masked-type laser-driven dielectric accelerator was studied. Although a preliminary analysis concluded that the grating period and the electron speed must satisfy the matching condition L_G/λ = v/c, a deformation of the wavefront in a pillar of the grating relaxed the matching condition and enabled slow electrons to be accelerated. The simulation results using the free FDTD code Meep showed that a low energy electron of 20 keV felt an acceleration field strength of 20 MV/m and gradually felt a higher field as its speed increased. Finally, the ultra-relativistic electron felt a field strength of 600 MV/m. The Meep code also showed that the length of the accelerator needed to reach an energy of 1 MeV was 3.8 mm, and the required laser power and energy were 11 GW and 350 mJ, respectively. Restrictions on the laser were eased by adopting sequential laser pulses. If the accelerator is illuminated by N sequential pulses, the pulse power, pulse width, and pulse energy are reduced to 1/N, 1/N, and 1/N², respectively. The required laser power per pulse is estimated to be 2.2 GW when ten pairs of sequential laser pulses are irradiated.
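The matching condition L_G/λ = v/c quoted above ties the grating period to the electron's relativistic speed factor β. A small Python sketch of that arithmetic (electron rest energy ≈ 511 keV; the function names are illustrative):

```python
import math

def beta_from_kinetic_energy(t_kev, rest_kev=511.0):
    """Relativistic speed factor beta = v/c for an electron of kinetic
    energy t_kev, from gamma = 1 + T/mc^2 and beta = sqrt(1 - 1/gamma^2)."""
    gamma = 1.0 + t_kev / rest_kev
    return math.sqrt(1.0 - 1.0 / gamma**2)

def matched_grating_period(t_kev, wavelength_um):
    """Grating period (same units as the laser wavelength) satisfying
    the synchronicity condition L_G / lambda = v / c."""
    return beta_from_kinetic_energy(t_kev) * wavelength_um
```

For the 20 keV electrons discussed in the abstract, β ≈ 0.27, so the matched grating period is only about a quarter of the drive wavelength, which is why relaxing the strict matching condition matters for low-energy injection. Note also that the quoted pulse-splitting scaling is self-consistent: with power P/N and width τ/N, the per-pulse energy is (P/N)(τ/N) = E/N².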
Study of the transverse beam motion in the DARHT Phase II accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yu-Jiuan; Fawley, W M; Houck, T L
1998-08-20
The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will accelerate a 4-kA, 3-MeV, 2-µs long electron current pulse to 20 MeV. The energy variation of the beam within the flat-top portion of the current pulse is ±0.5%. The performance of the DARHT Phase II radiographic machine requires the transverse beam motion to be much less than the beam spot size, which is about 1.5 mm diameter on the x-ray converter. In general, the leading causes of transverse beam motion in an accelerator are the beam breakup instability (BBU) and the corkscrew motion. We have modeled the transverse beam motion in the DARHT Phase II accelerator with various magnetic tunes and accelerator cell configurations by using the BREAKUP code. The predicted sensitivity of corkscrew motion and BBU growth to different tuning algorithms will be presented.
Lattice Calibration with Turn-By-Turn BPM Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Xiaobiao; /SLAC; Sebek, James
2012-07-02
Turn-by-turn beam position monitor (BPM) data from multiple BPMs are fitted with a tracking code to calibrate magnet strengths in a manner similar to the well-known LOCO code. Simulations show that this turn-by-turn method can be a quick and efficient way to calibrate optics. The method is applicable to both linacs and ring accelerators. Experimental results for a section of the SPEAR3 ring are also shown.
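A common first step in working with turn-by-turn BPM data is extracting the fractional betatron tune from each BPM record before fitting a lattice model. A hedged Python sketch using a brute-force DFT peak search (illustrative only; the paper's calibration fits a full tracking model, not this single-frequency estimate):

```python
import cmath
import math

def estimate_tune(x):
    """Estimate the fractional betatron tune from one BPM's turn-by-turn
    record x[0..N-1] by scanning a fine frequency grid and locating the
    peak of the discrete Fourier transform magnitude."""
    n = len(x)
    mean = sum(x) / n                      # remove the closed-orbit offset
    best_nu, best_amp = 0.0, -1.0
    steps = 2000                           # grid resolution of 1/steps
    for k in range(1, steps // 2):
        nu = k / steps
        amp = abs(sum((x[i] - mean) * cmath.exp(-2j * math.pi * nu * i)
                      for i in range(n)))
        if amp > best_amp:
            best_nu, best_amp = nu, amp
    return best_nu
```

Interpolated FFT or NAFF-style refinements improve on this grid search, but even the plain peak locates the tune to well within the main spectral lobe.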
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
Neutron skyshine from end stations of the Continuous Electron Beam Accelerator Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Rai-Ko S.
1991-12-01
The MORSE_CG code from Oak Ridge National Laboratory was applied to the estimation of the neutron skyshine from three end stations of the Continuous Electron Beam Accelerator Facility (CEBAF), Newport News, VA. Calculations with other methods and an experiment had been directed at assessing the annual neutron dose equivalent at the site boundary. A comparison of results obtained with different methods is given, and the effect of different temperatures and humidities will be discussed.
High spatial resolution measurements in a single stage ram accelerator
NASA Technical Reports Server (NTRS)
Hinkey, J. B.; Burnham, E. A.; Bruckner, A. P.
1992-01-01
High spatial resolution experimental tube wall pressure measurements of ram accelerator gas dynamic phenomena are presented in this paper. The ram accelerator is a ramjet-in-tube device which operates in a manner similar to that of a conventional ramjet. The projectile resembles the centerbody of a ramjet and travels supersonically through a tube filled with a combustible gaseous mixture, with the tube acting as the outer cowling. Pressure data are recorded as the projectile passes by sensors mounted in the tube wall at various locations along the tube. Utilization of special highly instrumented sections of tube has allowed the recording of gas dynamic phenomena with high resolution. High spatial resolution tube wall pressure data from the three regimes of propulsion studied to date (subdetonative, transdetonative, and superdetonative) in a single stage gas mixture are presented and reveal the three-dimensional character of the flow field induced by the projectile fins and by the canting of the projectile body relative to the tube wall. Also presented for comparison with the experimental data are calculations made with an inviscid, three-dimensional CFD code. The knowledge gained from these experiments and simulations is useful in understanding the underlying nature of ram accelerator propulsive regimes, as well as in assisting the validation of three-dimensional CFD codes which model unsteady, chemically reactive flows.
Convergence of the Ponderomotive Guiding Center approximation in the LWFA
NASA Astrophysics Data System (ADS)
Silva, Thales; Vieira, Jorge; Helm, Anton; Fonseca, Ricardo; Silva, Luis
2017-10-01
Plasma accelerators have arisen as potential candidates for future accelerator technology in the last few decades because of their predicted compactness and low cost. One of the proposed designs for plasma accelerators is based on Laser Wakefield Acceleration (LWFA). However, simulations of such systems have to resolve the laser wavelength, which is orders of magnitude smaller than the plasma wavelength. In this context, the Ponderomotive Guiding Center (PGC) algorithm for particle-in-cell (PIC) simulations is a potent tool. The laser is approximated by its envelope, which leads to a speedup of around 100 times because the laser wavelength is not resolved. The plasma response is well understood, and comparison with the full PIC code shows excellent agreement. However, for LWFA, the convergence of the self-injected beam parameters, such as energy and charge, has not been studied before and is of vital importance for using the algorithm to predict beam parameters. Our goal is a thorough investigation of the stability and convergence of the algorithm in situations of experimental relevance for LWFA. To this end, we perform simulations using the PGC algorithm implemented in the PIC code OSIRIS. To verify the PGC predictions, we compare the results with full PIC simulations. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant agreement No 653782.
Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B.; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain
2017-01-01
Background: The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Results: Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645 863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. 
Accelerated logistic regression code developed in this project has been incorporated into the PLINK2 project. Conclusions: Using iterative competition-based OI, we have developed a new, faster implementation of logistic regression for genome-wide association studies analysis. We present lessons learned and recommendations on running a successful OI process for bioinformatics. PMID:28327993
Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain; Jelinsky, Scott A
2017-05-01
The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645 863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. 
Accelerated logistic regression code developed in this project has been incorporated into the PLINK2 project. Using iterative competition-based OI, we have developed a new, faster implementation of logistic regression for genome-wide association studies analysis. We present lessons learned and recommendations on running a successful OI process for bioinformatics. © The Author 2017. Published by Oxford University Press.
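As a rough sketch of the computation that was accelerated (not the PLINK implementation, which is heavily optimized C), per-variant case-control logistic regression can be fitted with Newton-Raphson iterations; the data below are simulated, with a hypothetical true log odds ratio of 1.0:

```python
import numpy as np

def logistic_assoc(genotypes, phenotype, n_iter=25, tol=1e-8):
    """Fit one logistic regression per variant (intercept + allele count)
    with Newton-Raphson; return the per-variant log odds ratios."""
    n, m = genotypes.shape
    betas = np.zeros(m)
    for j in range(m):
        X = np.column_stack([np.ones(n), genotypes[:, j]])
        beta = np.zeros(2)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))      # fitted probabilities
            hessian = X.T @ (X * (p * (1.0 - p))[:, None])
            step = np.linalg.solve(hessian, X.T @ (phenotype - p))
            beta += step
            if np.max(np.abs(step)) < tol:
                break
        betas[j] = beta[1]                            # genotype effect
    return betas

# Simulated cohort: 400 subjects, one variant, risk increases with allele count
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(400, 1)).astype(float)
pheno = (rng.random(400) < 1.0 / (1.0 + np.exp(0.5 - geno[:, 0]))).astype(float)
```

GWAS-scale speedups of the kind reported above come from vectorizing this inner loop across many variants at once, plus the parallelization and data-initialization changes the abstract describes.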
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, A.; Barnard, J.J.; Briggs, R.J.
The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL), a collaboration of LBNL, LLNL, and PPPL, has achieved 60-fold pulse compression of ion beams on the Neutralized Drift Compression eXperiment (NDCX) at LBNL. In NDCX, a ramped voltage pulse from an induction cell imparts a velocity "tilt" to the beam; the beam's tail then catches up with its head in a plasma environment that provides neutralization. The HIFS-VNL's mission is to carry out studies of Warm Dense Matter (WDM) physics using ion beams as the energy source; an emerging thrust is basic target physics for heavy ion-driven Inertial Fusion Energy (IFE). These goals require an improved platform, labeled NDCX-II. Development of NDCX-II at modest cost was recently enabled by the availability of induction cells and associated hardware from the decommissioned Advanced Test Accelerator (ATA) facility at LLNL. Our initial physics design concept accelerates a ~30 nC pulse of Li+ ions to ~3 MeV, then compresses it to ~1 ns while focusing it onto a mm-scale spot. It uses the ATA cells themselves (with waveforms shaped by passive circuits) to impart the final velocity tilt; smart pulsers provide small corrections. The ATA accelerated electrons; acceleration of non-relativistic ions involves more complex beam dynamics both transversely and longitudinally. We are using analysis, an interactive one-dimensional kinetic simulation model, and multidimensional Warp-code simulations to develop the NDCX-II accelerator section. Both the LSP and Warp codes are being applied to the beam dynamics in the neutralized drift and final focus regions, and to the plasma injection process. The status of this effort is described.
NASA Astrophysics Data System (ADS)
Hodges, M.; Barzilov, A.; Chen, Y.; Lowe, D.
2016-10-01
The bremsstrahlung photon flux from the UNLV particle accelerator (Varian M6 model) was determined using the MCNP5 code for 3 MeV and 6 MeV incident electrons. Human biological equivalent dose rates due to accelerator operation were evaluated by folding the photon flux with flux-to-dose conversion factors. Dose rates were computed for the accelerator facility for M6 linac use under different operating conditions. The results showed that the use of collimators and linac internal shielding significantly reduced the dose rates throughout the facility. It was shown that the walls of the facility, in addition to the earthen berm enveloping the building, provide sufficient shielding to reduce dose rates outside to below the 2 mrem/h limit.
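The dose evaluation described above reduces, at each tally location, to folding the photon flux spectrum with flux-to-dose conversion factors. The sketch below uses entirely hypothetical flux values and conversion factors, not the facility's actual data:

```python
# Hypothetical photon flux spectrum at one location (photons / cm^2 / s),
# tallied at 0.5, 1, 2, and 3 MeV
flux = [1.2e3, 8.0e2, 3.5e2, 9.0e1]
# Hypothetical flux-to-dose factors ((mrem/h) per unit flux) at those energies
h_factors = [2.1e-6, 3.1e-6, 4.6e-6, 5.9e-6]

# Equivalent dose rate: energy-wise sum of flux times conversion factor
dose_rate = sum(f * h for f, h in zip(flux, h_factors))   # mrem/h
```

A location satisfies the criterion quoted above when `dose_rate` stays below the 2 mrem/h limit.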
Innovative HPC architectures for the study of planetary plasma environments
NASA Astrophysics Data System (ADS)
Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni
2016-04-01
DEEP-ER is a European Commission-funded project that develops a new type of High Performance Computing architecture. The revolutionary system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. Contrary to classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system the CPU nodes are grouped together (Cluster) independently from the accelerator nodes (Booster). The system is equipped with a state-of-the-art interconnection network, highly scalable and fast I/O, and a fail-recovery resiliency system. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now perform the computation of the electromagnetic fields in the Cluster while the particles are moved in the Booster. Using fast and scalable Xeon Phi accelerators in the Booster, we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing fully kinetic plasmas to be calculated with very low interpolation noise. The system will be used to perform fully kinetic, low-noise, 3D simulations of the interaction of the solar wind with the magnetospheres of the Earth and Mercury. Preliminary simulations have been performed in other HPC centers in order to compare the results across different systems. 
In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collision-less shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the plasma sheet and the magnetotail, and the variation of ion/electron plasma flows when crossing these frontiers. The simulations also give access to detailed information about the particle dynamics and their velocity distribution at locations that can be used for comparison with satellite data.
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.
1996-01-01
Solving for the dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine the acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computational efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully and efficiently to solve a common aerospace buffeting wind analysis.
Empirical evidence for acceleration-dependent amplification factors
Borcherdt, R.D.
2002-01-01
Site-specific amplification factors, Fa and Fv, used in current U.S. building codes decrease with increasing base acceleration level as implied by the Loma Prieta earthquake at 0.1g and extrapolated using numerical models and laboratory results. The Northridge earthquake recordings of 17 January 1994 and subsequent geotechnical data permit empirical estimates of amplification at base acceleration levels up to 0.5g. Distance measures and normalization procedures used to infer amplification ratios from soil-rock pairs in predetermined azimuth-distance bins significantly influence the dependence of amplification estimates on base acceleration. Factors inferred using a hypocentral distance norm do not show a statistically significant dependence on base acceleration. Factors inferred using norms implied by the attenuation functions of Abrahamson and Silva show a statistically significant decrease with increasing base acceleration. The decrease is statistically more significant for stiff clay and sandy soil (site class D) sites than for stiffer sites underlain by gravely soils and soft rock (site class C). The decrease in amplification with increasing base acceleration is more pronounced for the short-period amplification factor, Fa, than for the midperiod factor, Fv.
Characteristics of four SPE groups with different origins and acceleration processes
NASA Astrophysics Data System (ADS)
Kim, R.-S.; Cho, K.-S.; Lee, J.; Bong, S.-C.; Joshi, A. D.; Park, Y.-D.
2015-09-01
Solar proton events (SPEs) can be categorized into four groups based on their associations with flares or CMEs, inferred from onset timings as well as acceleration patterns using multienergy observations. In this study, we have investigated whether there are any typical characteristics of the associated events and acceleration sites in each group, using 42 SPEs from 1997 to 2012. We find the following: (i) if the proton acceleration starts from a lower energy, a SPE has a higher chance of being a strong event (> 5000 particle flux units (pfu)) even if its associated flare and/or CME are not so strong. The only difference between the SPEs associated with flares and those associated with CMEs is the location of the acceleration site. (ii) For the former (Group A), the sites are very low (~1 Rs) and close to the western limb, while the latter (Group C) have relatively higher (mean = 6.05 Rs) and wider acceleration sites. (iii) When the proton acceleration starts from the higher energy (Group B), a SPE tends to be a relatively weak event (< 1000 pfu), although its associated CME is relatively stronger than in the previous groups. (iv) The SPEs characterized by simultaneous acceleration across the whole energy range within 10 min (Group D) tend to show the weakest proton flux (mean = 327 pfu) in spite of strong associated eruptions. Based on these results, we suggest that the different characteristics of SPEs are mainly due to the different conditions of magnetic connectivity and particle density, which change with longitude and height as well as with their origin.
NASA Astrophysics Data System (ADS)
Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao
2017-10-01
UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility by using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming techniques, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck, together with their variants and hybrid methods. With C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structures and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic situations respectively, are presented to demonstrate the validity and performance of the UPSF code.
Efficient Modeling of Laser-Plasma Accelerators with INF&RNO
NASA Astrophysics Data System (ADS)
Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.
2010-11-01
The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and a set of validation tests is presented here together with a discussion of performance.
Go, Michael R; Masterson, Loren; Veerman, Brent; Satiani, Bhagwan
2016-02-01
To curb increasing volumes of diagnostic imaging and costs, reimbursement for carotid duplex ultrasound (CDU) is dependent on "appropriate" indications as documented by International Classification of Diseases (ICD) codes entered by ordering physicians. Historically, asymptomatic indications for CDU yield lower rates of abnormal results than symptomatic indications, and consensus documents agree that most asymptomatic indications for CDU are inappropriate. In our vascular laboratory, we perceived an increased rate of incorrect or inappropriate ICD codes. We therefore sought to determine if ICD codes were useful in predicting the frequency of abnormal CDU. We hypothesized that asymptomatic or nonspecific ICD codes would yield a lower rate of abnormal CDU than symptomatic codes, validating efforts to limit reimbursement in asymptomatic, low-yield groups. We reviewed all outpatient CDU done in 2011 at our institution. ICD codes were recorded, and each medical record was then reviewed by a vascular surgeon to determine if the assigned ICD code appropriately reflected the clinical scenario. CDU findings categorized as abnormal (>50% stenosis) or normal (<50% stenosis) were recorded. Each individual ICD code and group 1 (asymptomatic), group 2 (nonhemispheric symptoms), group 3 (hemispheric symptoms), group 4 (preoperative cardiovascular examination), and group 5 (nonspecific) ICD codes were analyzed for correlation with CDU results. Nine hundred ninety-four patients had 74 primary ICD codes listed as indications for CDU. Of assigned ICD codes, 17.4% were deemed inaccurate. Overall, 14.8% of CDU were abnormal. Of the 13 highest frequency ICD codes, only 433.10, an asymptomatic code, was associated with abnormal CDU. Four symptomatic codes were associated with normal CDU; none of the other high frequency codes were associated with CDU result. 
Patients in group 1 (asymptomatic) were significantly more likely to have an abnormal CDU compared to each of the other groups (P < 0.001, P < 0.001, P = 0.020, P = 0.002) and to all other groups combined (P < 0.001). Asymptomatic indications by ICD codes yielded higher rates of abnormal CDU than symptomatic indications. This finding is inconsistent with clinical experience and historical data, and we suggest that inaccurate coding may play a role. Limiting reimbursement for CDU in low-yield groups is reasonable. However, reimbursement policies based on ICD coding, for example, limiting payment for asymptomatic ICD codes, may impede use of CDU in high-yield patient groups. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ditommaso, Rocco; Carlo Ponzo, Felice; Auletta, Gianluca; Iacovino, Chiara; Nigro, Antonella
2015-04-01
The aim of this study is to compare the fundamental period of reinforced concrete buildings evaluated using the simplified approach proposed by the Italian seismic code (NTC 2008), numerical models, and real values retrieved from an experimental campaign performed on several buildings located in the Basilicata region (Italy). With the intention of proposing simplified relationships to evaluate the fundamental period of reinforced concrete buildings, scientists and engineers have performed several numerical and experimental campaigns on different structures all around the world to calibrate different kinds of formulas. Most formulas retrieved from both numerical and experimental analyses provide vibration periods smaller than those suggested by the Italian seismic code. However, it is well known that the fundamental period of a structure plays a key role in the correct evaluation of the spectral acceleration for seismic static analyses. Generally, simplified approaches impose safety factors greater than those related to in-depth nonlinear analyses, with the aim of covering possible unexpected uncertainties. The fundamental period given by the simplified formula of the Italian seismic code is considerably higher than the fundamental periods experimentally evaluated on real structures, with the consequence that the spectral acceleration adopted in seismic static analysis may differ significantly from the real spectral acceleration. This approach could produce a decrease in the safety factors obtained using linear and nonlinear seismic static analyses. Finally, the authors suggest a possible update of the Italian seismic code formula for the simplified estimation of the fundamental period of vibration of existing RC buildings, taking into account both elastic and inelastic structural behaviour and the interaction between structural and non-structural elements. 
Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2014 - RS4 ''Seismic observatory of structures and health monitoring''. References R. Ditommaso, M. Vona, M. R. Gallipoli and M. Mucciarelli (2013). Evaluation and considerations about fundamental periods of damaged reinforced concrete buildings. Nat. Hazards Earth Syst. Sci., 13, 1903-1912, 2013. www.nat-hazards-earth-syst-sci.net/13/1903/2013. doi:10.5194/nhess-13-1903-2013
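For reference, the simplified code estimate discussed above has the form T1 = C1·H^(3/4), with C1 = 0.075 for reinforced concrete frame buildings (the same form appears in Eurocode 8); the 21 m height below is purely illustrative:

```python
def fundamental_period_ntc2008(height_m, c1=0.075):
    """Simplified code estimate of the fundamental period, T1 = C1 * H^(3/4).
    c1 = 0.075 applies to reinforced concrete frame structures."""
    return c1 * height_m ** 0.75

# Illustrative 21 m tall (roughly 7-storey) RC building
t1 = fundamental_period_ntc2008(21.0)
```

Experimentally identified periods shorter than this estimate shift the design point to a different region of the response spectrum, which is exactly the concern raised in the abstract above.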
Reduced 3d modeling on injection schemes for laser wakefield acceleration at plasma scale lengths
NASA Astrophysics Data System (ADS)
Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo
2017-10-01
Current modelling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) codes, which are computationally demanding. In PIC simulations the laser wavelength λ0, in the μm range, has to be resolved over acceleration lengths in the meter range. A promising approach is the ponderomotive guiding center (PGC) solver, which considers only the laser envelope for laser pulse propagation. Then only the plasma skin depth λp has to be resolved, leading to speedups of (λp/λ0)². This allows a wide range of parameter studies to be performed and makes the solver suitable for λ0 << λp studies. We present the 3D version of a PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. Further, we discuss and characterize the validity of the PGC solver for injection schemes at plasma scale lengths, such as down-ramp injection, magnetic injection, and ionization injection, through parametric studies, full PIC simulations, and theoretical scaling. This work was partially supported by Fundacao para a Ciencia e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014 and PD/BD/105882/2014.
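The quoted (λp/λ0)² speedup can be estimated from the wavelength ratio alone. The density and laser wavelength below are typical LWFA values chosen for illustration, not parameters taken from the paper:

```python
import math

lam0 = 0.8                          # laser wavelength [um] (Ti:sapphire)
n_e = 1.0e18                        # plasma density [cm^-3]
lam_p = 3.34e10 / math.sqrt(n_e)    # plasma wavelength [um], ~33 um here

# The PGC solver resolves lam_p instead of lam0, so the cell count per
# wavelength drops by lam_p/lam0 in each of two directions
speedup = (lam_p / lam0) ** 2
```

For these parameters the estimate is well above the ~100x figure quoted in the earlier OSIRIS abstract, which corresponds to a different density and resolution choice.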
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum, lower than 10⁻¹¹ mbar, to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required process pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. In order to ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK verified, the pressure profiles of the BRing are calculated for different parameters such as conductance, outgassing rates, and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
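For intuition about such pressure-profile calculations, the steady-state profile of a uniformly outgassing tube pumped at both ends is a textbook parabola (a special case, not the VAKTRAK model itself); all parameter values in the usage below are hypothetical:

```python
def pressure_profile(x, length, q, c, s):
    """Steady-state pressure [mbar] at position x [cm] along a tube of the
    given length [cm], pumped at both ends with speed s [l/s], with uniform
    specific outgassing q [mbar*l/(s*cm)] and specific conductance c [l*cm/s].
    Derived from dQ/dx = q and Q = -c*dP/dx with symmetric boundary pumps."""
    return q * x * (length - x) / (2.0 * c) + q * length / (2.0 * s)

# Hypothetical 10 m section: peak pressure sits at mid-tube
p_end = pressure_profile(0.0, 1000.0, 1e-12, 100.0, 100.0)
p_mid = pressure_profile(500.0, 1000.0, 1e-12, 100.0, 100.0)
```

The mid-tube peak, q·L²/(8c) + q·L/(2S), shows directly how conductance, outgassing rate, and pumping speed trade off, which is the parameter scan described above.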
Activation of accelerator construction materials by heavy ions
NASA Astrophysics Data System (ADS)
Katrík, P.; Mustafin, E.; Hoffmann, D. H. H.; Pavlovič, M.; Strašík, I.
2015-12-01
Activation data for an aluminum target irradiated by a 200 MeV/u 238U ion beam are presented in this paper. The target was irradiated in the stacked-foil geometry and analyzed using gamma-ray spectroscopy. The purpose of the experiment was to study the role of primary particles, projectile fragments, and target fragments in the activation process using depth profiling of the residual activity. The study showed which particles contribute dominantly to the target activation. The experimental data were compared with Monte Carlo simulations by the FLUKA 2011.2c.0 code. This study is part of a research program devoted to the activation of accelerator construction materials by high-energy (⩾200 MeV/u) heavy ions at GSI Darmstadt. The experimental data are needed to validate the computer codes used to simulate the interaction of swift heavy ions with matter.
NASA Astrophysics Data System (ADS)
González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco
2013-12-01
This contribution focuses on the optimization of matching-based motion estimation algorithms, widely used in video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization, which locates the code leaks; afterward, a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination of on-chip memory and SDRAM for the Nios II processor.
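As a reference point for what such hardware accelerates, matching-based motion estimation reduces to a sum-of-absolute-differences (SAD) search; the exhaustive full-search variant below is a generic sketch, not the optimized Nios II implementation:

```python
import numpy as np

def full_search(ref, cur, bx, by, bsize, srange):
    """Find the motion vector (dx, dy) minimizing the SAD between one block
    of the current frame and candidate blocks of the reference frame."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best, best_cost = (0, 0), None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize].astype(int)
            cost = int(np.abs(cand - block).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

The inner SAD accumulation is exactly the kind of tight loop that profiling identifies and that a custom instruction or hardware logic block can replace.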
Modeling Multi-Bunch X-band Photoinjector Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, R A; Anderson, S G; Gibson, D J
An X-band test station is being developed at LLNL to investigate accelerator optimization for future upgrades to mono-energetic gamma-ray technology at LLNL. The test station will consist of a 5.5-cell X-band rf photoinjector, a single accelerator section, and beam diagnostics. Of critical importance to the functioning of the LLNL X-band system with multiple electron bunches is the performance of the photoinjector. In-depth modeling of the performance of the Mark 1 LLNL/SLAC X-band rf photoinjector will be presented, addressing important challenges that must be overcome in order to fabricate a multi-bunch Mark 2 photoinjector. Emittance performance is evaluated under different nominal electron bunch parameters using electrostatic codes such as PARMELA. The wake potential is analyzed using electromagnetic time-domain simulations with the ACE3P code T3P. Plans for multi-bunch experiments and the implementation of photoinjector advances in the Mark 2 design will also be discussed.
NASA Technical Reports Server (NTRS)
Reddell, Brandon
2015-01-01
Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package developed at CERN over the last 40+ years that includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.
Stokes versus Basset: comparison of forces governing motion of small bodies with high acceleration
NASA Astrophysics Data System (ADS)
Krafcik, A.; Babinec, P.; Frollo, I.
2018-05-01
In this paper, the importance of the forces governing the motion of a millimetre-sized sphere in a viscous fluid has been examined. As has been shown previously, for spheres moving with a high initial acceleration, the Basset history force should be used in addition to the commonly used Stokes force. This paper introduces the concept of history forces, which are almost unknown to students despite their interesting mathematical structure and physical meaning, and shows the implementation of simple and efficient numerical methods as a MATLAB code to simulate the motion of a falling sphere. An important application of this code could be, for example, the simulation of microfluidic systems, where the external forces are very large and the relevant timescale is on the order of milliseconds to seconds, so that the Basset history force cannot be neglected.
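A minimal sketch of such a simulation, assuming a piecewise-constant discretization of the Basset history integral (an illustration of the idea, not the authors' MATLAB code):

```python
import math

def fall_with_basset(radius, rho_p, rho_f, mu, g=9.81, dt=1e-4, t_end=0.05):
    """Explicitly integrate a sphere released from rest in a viscous fluid under
    gravity/buoyancy, Stokes drag, and the Basset history force. The history
    integral of (dv/dtau) / sqrt(t - tau) is evaluated with piecewise-constant
    acceleration, integrating the singular kernel analytically on each step."""
    m = rho_p * 4.0 / 3.0 * math.pi * radius**3
    stokes = 6.0 * math.pi * mu * radius                        # Stokes drag coefficient
    basset = 6.0 * radius**2 * math.sqrt(math.pi * rho_f * mu)  # Basset prefactor
    v, t = 0.0, 0.0
    history = []                                                # (a_j, t_j, t_{j+1})
    for _ in range(int(t_end / dt)):
        hist = sum(a_j * 2.0 * (math.sqrt(t - t0) - math.sqrt(t - t1))
                   for a_j, t0, t1 in history)
        a = (m * g * (1.0 - rho_f / rho_p) - stokes * v - basset * hist) / m
        history.append((a, t, t + dt))
        v += a * dt
        t += dt
    return v
```

Because the 1/sqrt(t - tau) kernel is integrated analytically over each step, its singularity at tau = t causes no trouble; the O(n^2) cost of the history sum is the usual price of the Basset term.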
GPU accelerated implementation of NCI calculations using promolecular density.
Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric
2017-05-30
The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive for describing ligand-protein binding. A custom implementation of NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performances of three code versions are examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
Optimization of equipment for electron radiation processing
NASA Astrophysics Data System (ADS)
Tartz, M.; Hartmann, E.; Lenk, M.; Mehnert, R.
1999-05-01
In the course of the last decade, IOM Leipzig has developed low-energy electron accelerators for electron beam curing of polymer coatings and printing inks. In order to optimize the electron irradiation field, electron optical calculations have been carried out using the commercially available EGUN code. The present study outlines the design of the diode-type low-energy electron accelerators LEA and EBOGEN, taking into account the electron optical effects of secondary components such as the retaining rods installed in the cathode assembly.
Code TESLA for Modeling and Design of High-Power High-Efficiency Klystrons
2011-03-01
The code TESLA, for the modeling and design of high-power, high-efficiency klystrons, is presented by I.A. Chernyavskiy (SAIC, McLean, VA 22102, U.S.A.), S.J. Cooke, and co-authors. The code is applied to multiple-beam klystrons as high-power RF sources, which are widely used or proposed for use in future accelerators. Comparisons of TESLA modelling results with experimental data for a few multiple-beam klystrons are shown.
NASA Technical Reports Server (NTRS)
Brieda, Lubos
2015-01-01
This talk presents three different tools developed recently for contamination analysis: an HTML QCM analyzer, which runs in a web browser and allows data analysis of QCM log files; a Java RGA extractor, which can load multiple SRS .ana files and extract pressure vs. time data; and a C++ contamination simulation code, a 3D particle-tracing code for modeling the transport of dust particulates and molecules. The simulation code uses residence time to determine whether molecules stick, and particulates can be sampled from IEST-STD-1246 and accelerated by aerodynamic forces.
Exploring Accelerating Science Applications with FPGAs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storaasli, Olaf O; Strenski, Dave
2007-01-01
FPGA hardware and tools (VHDL, Viva, MitrionC and CHiMPS) are described. FPGA performance is evaluated on two Cray XD1 systems (Virtex-II Pro 50 and Virtex-4 LX160) for human genome (DNA and protein) sequence comparisons for a computational biology code (FASTA). Scalable FPGA speedups of 50X (Virtex-II) and 100X (Virtex-4) over a 2.2 GHz Opteron were achieved. Coding and IO issues faced for human genome data are described.
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and is therefore different from recent methods that improve the optimisation itself. Our warm-start strategy uses carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.
NASA Technical Reports Server (NTRS)
Klein, K. E.; Backhausen, F.; Bruner, H.; Eichhorn, J.; Jovy, D.; Schotte, J.; Vogt, L.; Wegman, H. M.
1980-01-01
A group of 12 highly trained athletes and a group of 12 untrained students were subjected to passive changes of position on a tilt table and to positive accelerations in a centrifuge. During a 20 min tilt, including two additional respiratory maneuvers, the number of faints and the average cardiovascular responses did not differ significantly between the groups. During a linear increase of acceleration, the average blackout level was almost identical in both groups. Statistically significant coefficients of product-moment correlation for various relations were obtained. The coefficient of multiple determination computed for the dependence of acceleration tolerance on heart-eye distance and systolic blood pressure at rest allows the explanation of almost 50% of the variation of acceleration tolerance. The maximum oxygen uptake showed the expected significant correlation to the heart rate at rest, but not to the acceleration tolerance or to the cardiovascular responses to tilting.
Physics and engineering design of the accelerator and electron dump for SPIDER
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.
2011-06-01
The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and in a later stage D- ions) from an ITER size ion source. The main requirements of this experiment are a H-/D- extracted current density larger than 355/285 A m-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. 
In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Wei; Petrosian, Vahe; Mariska, John T.
2009-09-10
Acceleration and transport of high-energy particles and fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y; Mazur, T; Green, O
Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: We first translated PENELOPE from FORTRAN to C++ and validated that the translation produced equivalent results. Then we adapted the C++ code to CUDA in a workflow optimized for the GPU architecture. We expanded upon the original code to include voxelized transport boosted by Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, we incorporated the vendor-provided MRIdian head model into the code. We performed a set of experimental measurements on MRIdian to examine the accuracy of both the head model and gPENELOPE, and then applied gPENELOPE toward independent validation of patient doses calculated by MRIdian's KMC. Results: We achieve an average acceleration factor of 152 compared to the original single-thread FORTRAN implementation, with the original accuracy preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1) and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: We developed a Monte Carlo simulation platform based on a GPU-accelerated version of PENELOPE. We validated that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next-generation MR-IGRT systems.
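Woodcock (delta) tracking, mentioned in the Methods, avoids ray-tracing voxel boundaries by sampling path lengths against a majorant cross-section and rejecting "virtual" collisions. A minimal 1D sketch of the idea (illustrative only, not the gPENELOPE implementation):

```python
import math
import random

def woodcock_track(mu_voxels, voxel_size, rng=random):
    """Sample one free-flight depth through a 1D voxelized medium with Woodcock
    (delta) tracking: draw exponential steps against the majorant cross-section
    mu_max, then accept each tentative collision with probability
    mu_local / mu_max; rejected ones are 'virtual' and the flight continues.
    Returns the collision depth, or None if the particle escapes the slab."""
    mu_max = max(mu_voxels)
    length = len(mu_voxels) * voxel_size
    x = 0.0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max  # step sampled at the majorant
        if x >= length:
            return None
        if rng.random() < mu_voxels[int(x // voxel_size)] / mu_max:
            return x
```

In a uniform medium the sampled depths reproduce the 1/mu mean free path; in a heterogeneous medium the same loop works without ever intersecting voxel boundaries, which is part of what makes the scheme attractive on a GPU.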
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, in 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as in real machines, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by the empirical formulas. The effect on the skyshine dose of different structures of the accelerator head is also discussed in this paper.
ActiWiz 3 – an overview of the latest developments and their application
NASA Astrophysics Data System (ADS)
Vincke, H.; Theis, C.
2018-06-01
In 2011 the ActiWiz code was developed at CERN in order to optimize the choice of materials for accelerator equipment from a radiological point of view. Since then the code has been extended to calculate complete nuclide inventories and provide evaluations with respect to radiotoxicity, inhalation doses, etc. Until now the software included only pre-defined radiation environments for CERN's high-energy proton accelerators, based on FLUKA Monte Carlo calculations. Eventually the decision was taken to invest in a major revamping of the code. Starting with version 3, the software is no longer limited to pre-defined radiation fields; within a few seconds it can also treat arbitrary environments for which fluence spectra are available. This has become possible through the use of ~100 CPU-years' worth of FLUKA Monte Carlo simulations as well as the JEFF cross-section library for neutrons below 20 MeV. The latest code version also allowed for the efficient inclusion of 42 additional radiation environments of the LHC experiments, as well as considerably more flexibility in characterizing waste from CERN's Large Electron Positron collider (LEP). New, fully integrated analysis functionalities, such as automatic evaluation of difficult-to-measure nuclides and rapid assessment of the temporal evolution of quantities like radiotoxicity or dose rates, make the software a powerful characterization tool complementary to general-purpose MC codes like FLUKA. In this paper an overview of the capabilities is given using recent examples from the domain of waste characterization as well as operational radiation protection.
A redshift survey of IRAS galaxies. V - The acceleration on the Local Group
NASA Technical Reports Server (NTRS)
Strauss, Michael A.; Yahil, Amos; Davis, Marc; Huchra, John P.; Fisher, Karl
1992-01-01
The acceleration on the Local Group is calculated based on a full-sky redshift survey of 5288 galaxies detected by IRAS. A formalism is developed to compute the distribution function of the IRAS acceleration for a given power spectrum of initial perturbations. The computed acceleration on the Local Group points 18-28 deg from the direction of the Local Group peculiar velocity vector. The data suggest that the CMB dipole is indeed due to the motion of the Local Group, that this motion is gravitationally induced, and that the distribution of IRAS galaxies on large scales is related to that of dark matter by a simple linear biasing model.
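The computed quantity is, up to constants, a weighted inverse-square vector sum over the catalog. A minimal sketch (the weights standing in for G, the mean galaxy mass, and the selection-function correction, which are assumptions here rather than the survey's actual weighting):

```python
import numpy as np

def dipole_acceleration(positions, weights):
    """Estimate the gravitational acceleration on the origin (the Local Group)
    from a catalog of galaxy positions: g is proportional to
    sum_i w_i * r_i / |r_i|^3. The weights w_i absorb G, the mean galaxy
    mass, and any survey selection-function correction."""
    r = np.linalg.norm(positions, axis=1)
    return (weights[:, None] * positions / r[:, None] ** 3).sum(axis=0)

def misalignment_deg(g, v):
    """Angle (degrees) between the computed acceleration and another vector,
    e.g. the Local Group peculiar-velocity direction."""
    c = np.dot(g, v) / (np.linalg.norm(g) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```

The 18-28 degree misalignment quoted in the abstract is exactly the quantity `misalignment_deg` measures between the catalog dipole and the CMB-inferred velocity.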
NASA Astrophysics Data System (ADS)
Coindreau, O.; Duriez, C.; Ederli, S.
2010-10-01
Progress in the treatment of air oxidation of zirconium in severe accident (SA) codes is required for a reliable analysis of severe accidents involving air ingress. Air oxidation of zirconium can lead to accelerated core degradation and increased fission product release, especially of the highly radiotoxic ruthenium. This paper presents a model to simulate the air oxidation kinetics of Zircaloy-4 in the 600-1000 °C temperature range. It is based on available experimental data, including separate-effect experiments performed at IRSN and at Forschungszentrum Karlsruhe. The kinetic transition, named "breakaway", from a diffusion-controlled regime to accelerated oxidation is taken into account in the modeling via a critical mass gain parameter. The progressive propagation of the locally initiated breakaway is modeled by a linear increase in oxidation rate with time. Finally, when breakaway propagation is complete, the oxidation rate stabilizes and the kinetics is modeled by a linear law. This new modeling is integrated in the severe accident code ASTEC, jointly developed by IRSN and GRS. Model predictions and thermogravimetric experimental data show good agreement for different air flow rates and for slow temperature-transient conditions.
NASA Technical Reports Server (NTRS)
Kutepov, A. A.; Feofilov, A. G.; Manuilova, R. O.; Yankovsky, V. A.; Rezac, L.; Pesnell, W. D.; Goldberg, R. A.
2008-01-01
The Accelerated Lambda Iteration (ALI) technique was developed in stellar astrophysics at the beginning of the 1990s for solving the non-LTE radiative transfer problem in atomic lines and multiplets in stellar atmospheres. It was later successfully applied to modeling the non-LTE emissions and radiative cooling/heating in the vibrational-rotational bands of molecules in planetary atmospheres. Like standard lambda iteration, ALI operates with matrices of minimal dimension. However, it provides a higher convergence rate and better stability by removing from the iterating process the photons trapped in the optically thick line cores. In the current ALI-ARMS (ALI for Atmospheric Radiation and Molecular Spectra) code version, additional acceleration is provided by the opacity distribution function (ODF) approach and by "decoupling". The former allows replacing the band branches by single lines of special shape, whereas the latter treats the non-linearity caused by strong near-resonant vibration-vibrational level coupling without additionally linearizing the statistical equilibrium equations. The latest code applications to the non-LTE diagnostics of molecular band emissions of the Earth's and Martian atmospheres, as well as to non-LTE IR cooling/heating calculations, are discussed.
Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao
2014-01-01
An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain the reconstruction quality. A three-dimensional dictionary trains its atoms in the form of blocks, which can exploit the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, used for sparse coding and image updating, respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first utilize NVIDIA's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations operate in the dictionary learning algorithm and the image updating part, namely the orthogonal matching pursuit (OMP) algorithm and the k-singular value decomposition (K-SVD) algorithm. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324 times is achieved compared with the CPU-only code when the number of MRI slices is 24.
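The OMP step that dominates sparse coding can be sketched in a few lines (a plain CPU reference in Python/NumPy, not the CUDA version described above):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the dictionary atom (column
    of D, assumed unit-norm) most correlated with the residual, then re-solve
    least squares on the active set; repeat k times."""
    residual, idx = y.astype(float).copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef             # orthogonal re-projection
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

On the GPU, the correlation step `D.T @ residual` and the batched least-squares solves are the parts that parallelize naturally across image patches.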
Nedaie, Hassan Ali; Darestani, Hoda; Banaee, Nooshin; Shagholi, Negin; Mohammadi, Kheirollah; Shahvar, Arjang; Bayat, Esmaeel
2014-01-01
High-energy linacs produce secondary particles such as neutrons (photoneutron production). These neutrons play an important role during treatment with high-energy photons, in terms of both radiation protection and dose escalation. In this work, the neutron dose equivalents of 18 MV Varian and Elekta accelerators are measured with thermoluminescent dosimeter TLD600 and TLD700 detectors and compared with Monte Carlo calculations. For discrimination of neutron and photon doses, the TLDs were first calibrated separately with gamma and neutron doses. Gamma calibration was carried out in two procedures: with a standard 60Co source and with the 18 MV linac photon beam. For neutron calibration with a 241Am-Be source, irradiations were performed over several different time intervals. The Varian and Elekta linac heads and the phantom were simulated with the MCNPX code (v. 2.5). The neutron dose equivalent was calculated on the central axis, at the phantom surface and at depths of 1, 2, 3.3, 4, 5, and 6 cm. The maximum photoneutron dose equivalents calculated by the MCNPX code were 7.06 and 2.37 mSv.Gy-1 for the Varian and Elekta accelerators, respectively, compared with 50 and 44 mSv.Gy-1 obtained with the TLDs. All the results showed more photoneutron production in the Varian accelerator than in the Elekta. According to the results, it seems that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry inside the linac field, owing to the high photon flux, while the MCNPX code is an appropriate alternative for studying photoneutron production. PMID:24600167
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
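The core idea, replacing compute stages by the passage of their measured time inside a discrete-event loop, can be sketched as follows (stage costs and the iteration structure are illustrative stand-ins, not TADSim's actual model):

```python
import heapq

class AppSim:
    """Skeleton application simulator: compute stages are abstracted to
    durations and replayed on a virtual clock by a discrete-event loop."""
    def __init__(self):
        self.clock = 0.0
        self._events = []   # (time, sequence number, action) min-heap
        self._seq = 0
        self.log = []

    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._events:
            self.clock, _, action = heapq.heappop(self._events)
            action(self)
        return self.clock

def tad_iteration(stage_costs, n_left):
    """One abstracted TAD-like cycle: instead of doing the compute-bound work,
    just advance the virtual clock by the stages' (hypothetical) measured costs."""
    def action(sim):
        sim.log.append(sim.clock)
        if n_left > 0:
            sim.schedule(sum(stage_costs), tad_iteration(stage_costs, n_left - 1))
    return action

# Replay 5 cycles whose two stages cost 2.0 and 0.5 virtual time units each.
sim = AppSim()
sim.schedule(0.0, tad_iteration([2.0, 0.5], 5))
total_time = sim.run()
```

Speculative spawning of a stage would be modeled here by scheduling overlapping events rather than strictly sequential ones, which is exactly the kind of what-if experiment such a proxy makes cheap.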
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute-intensive numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, with the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix-vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
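The benchmark pattern, timing repeated control-matrix-times-slope-vector products and reporting mean latency and jitter, can be sketched as follows (a NumPy stand-in with illustrative sizes, not the Xeon Phi benchmark itself):

```python
import time
import numpy as np

def benchmark_mvm(n_acts, n_slopes, iters=50):
    """Time repeated wavefront-reconstruction MVMs (control matrix times slope
    vector) and report mean latency and its standard deviation (jitter), the
    two figures of merit for an AO real-time controller."""
    rng = np.random.default_rng(0)
    cmat = rng.standard_normal((n_acts, n_slopes)).astype(np.float32)
    slopes = rng.standard_normal(n_slopes).astype(np.float32)
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        commands = cmat @ slopes          # the reconstruction step being benchmarked
        samples.append(time.perf_counter() - t0)
    return float(np.mean(samples)), float(np.std(samples)), commands
```

For a real-time controller the jitter matters as much as the mean: a rare slow MVM that misses the loop deadline is worse than a uniformly slightly slower one, which is why the paper examines execution-time variability.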
Bunch modulation in LWFA blowout regime
NASA Astrophysics Data System (ADS)
Vyskočil, Jiří; Klimo, Ondřej; Vieira, Jorge; Korn, Georg
2015-05-01
Laser wakefield acceleration (LWFA) is able to produce high-quality electron bunches of interest for many applications, ranging from coherent light sources to high-energy physics. The blow-out regime of LWFA provides an excellent accelerating structure, able to maintain the small transverse emittance and energy spread of the accelerated electron beam if combined with localised injection. A modulation of the back of a self-injected electron bunch in the blowout regime appears in 3D particle-in-cell simulations with the code OSIRIS. The shape of the modulation is connected to the polarization of the driving laser pulse, although the wavelength of the modulation is longer than that of the pulse. A circularly polarized laser pulse leads to a corkscrew-like modulation, while in the case of linear polarization the modulation lies in the polarization plane.
Solar Wind Acceleration: Modeling Effects of Turbulent Heating in Open Flux Tubes
NASA Astrophysics Data System (ADS)
Woolsey, Lauren N.; Cranmer, Steven R.
2014-06-01
We present two self-consistent coronal heating models that determine the properties of the solar wind generated and accelerated in magnetic field geometries that are open to the heliosphere. These models require only the radial magnetic field profile as input. The first code, ZEPHYR (Cranmer et al. 2007) is a 1D MHD code that includes the effects of turbulent heating created by counter-propagating Alfven waves rather than relying on empirical heating functions. We present the analysis of a large grid of modeled flux tubes (> 400) and the resulting solar wind properties. From the models and results, we recreate the observed anti-correlation between wind speed at 1 AU and the so-called expansion factor, a parameterization of the magnetic field profile. We also find that our models follow the same observationally-derived relation between temperature at 1 AU and wind speed at 1 AU. We continue our analysis with a newly-developed code written in Python called TEMPEST (The Efficient Modified-Parker-Equation-Solving Tool) that runs an order of magnitude faster than ZEPHYR due to a set of simplifying relations between the input magnetic field profile and the temperature and wave reflection coefficient profiles. We present these simplifying relations as a useful result in themselves as well as the anti-correlation between wind speed and expansion factor also found with TEMPEST. Due to the nature of the algorithm TEMPEST utilizes to find solar wind solutions, we can effectively separate the two primary ways in which Alfven waves contribute to solar wind acceleration: 1) heating the surrounding gas through a turbulent cascade and 2) providing a separate source of wave pressure. We intend to make TEMPEST easily available to the public and suggest that TEMPEST can be used as a valuable tool in the forecasting of space weather, either as a stand-alone code or within an existing modeling framework.
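The simplest member of the family of equations a tool like TEMPEST solves is the classic isothermal Parker wind, whose transonic solution satisfies a transcendental equation solvable by bisection. A minimal sketch (isothermal and spherically symmetric, so without ZEPHYR/TEMPEST's turbulent heating, wave pressure, or flux-tube expansion):

```python
import math

G = 6.674e-11        # gravitational constant, SI
M_SUN = 1.989e30     # solar mass, kg

def parker_speed(r, c_s, M=M_SUN):
    """Wind speed v(r) on the transonic branch of the isothermal Parker wind,
    from the transcendental equation
        (v/c_s)^2 - ln (v/c_s)^2 = 4 ln(r/r_c) + 4 r_c / r - 3,
    with critical radius r_c = G M / (2 c_s^2), solved by bisection."""
    r_c = G * M / (2.0 * c_s**2)
    rhs = 4.0 * math.log(r / r_c) + 4.0 * r_c / r - 3.0
    f = lambda u: u * u - math.log(u * u) - rhs
    supersonic = r >= r_c                     # branch selection across the critical point
    lo, hi = (1.0, 50.0) if supersonic else (1e-8, 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # f is increasing in u on the supersonic branch, decreasing on the subsonic one
        if (f(mid) < 0.0) == supersonic:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * c_s
```

Codes like ZEPHYR and TEMPEST replace the isothermal right-hand side with turbulence-heated temperature and wave-pressure profiles, but the critical-point structure being solved for is the same.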
Schütz, U; Reichel, H; Dreinhöfer, K
2007-01-01
We introduce a grouping system for clinical practice which allows the separation of DRG coding in specific orthopaedic groups based on anatomic regions, operative procedures, therapeutic interventions and morbidity equivalent diagnosis groups. With this, a differentiated aim-oriented analysis of illustrated internal DRG data becomes possible. The group-specific difference of the coding quality between the DRG groups following primary coding by the orthopaedic surgeon and final coding by the medical controlling is analysed. In a consecutive series of 1600 patients parallel documentation and group-specific comparison of the relevant DRG parameters were carried out in every case after primary and final coding. Analysing the group-specific share in the additional CaseMix coding, the group "spine surgery" dominated, closely followed by the groups "arthroplasty" and "surgery due to infection, tumours, diabetes". Altogether, additional cost-weight-relevant coding was necessary most frequently in the latter group (84%), followed by group "spine surgery" (65%). In DRGs representing conservative orthopaedic treatment documented procedures had nearly no influence on the cost weight. The introduced system of case group analysis in internal DRG documentation can lead to the detection of specific problems in primary coding and cost-weight relevant changes of the case mix. As an instrument for internal process control in the orthopaedic field, it can serve as a communicative interface between an economically oriented classification of the hospital performance and a specific problem solution of the medical staff involved in the department management.
Homing endonucleases from mobile group I introns: discovery to genome engineering
2014-01-01
Homing endonucleases are highly specific DNA cleaving enzymes that are encoded within genomes of all forms of microbial life including phage and eukaryotic organelles. These proteins drive the mobility and persistence of their own reading frames. The genes that encode homing endonucleases are often embedded within self-splicing elements such as group I introns, group II introns and inteins. This combination of molecular functions is mutually advantageous: the endonuclease activity allows surrounding introns and inteins to act as invasive DNA elements, while the splicing activity allows the endonuclease gene to invade a coding sequence without disrupting its product. Crystallographic analyses of representatives from all known homing endonuclease families have illustrated both their mechanisms of action and their evolutionary relationships to a wide range of host proteins. Several homing endonucleases have been completely redesigned and used for a variety of genome engineering applications. Recent efforts to augment homing endonucleases with auxiliary DNA recognition elements and/or nucleic acid processing factors have further accelerated their use for applications that demand exceptionally high specificity and activity. PMID:24589358
Practice Location Characteristics of Non-Traditional Dental Practices.
Solomon, Eric S; Jones, Daniel L
2016-04-01
Current and future dental school graduates are increasingly likely to choose a non-traditional dental practice-a group practice managed by a dental service organization or a corporate practice with employed dentists-for their initial practice experience. In addition, the growth of non-traditional practices, which are located primarily in major urban areas, could accelerate the movement of dentists to those areas and contribute to geographic disparities in the distribution of dental services. To help the profession understand the implications of these developments, the aim of this study was to compare the location characteristics of non-traditional practices and traditional dental practices. After identifying non-traditional practices across the United States, the authors located those practices and traditional dental practices geographically by zip code. Non-traditional dental practices were found to represent about 3.1% of all dental practices, but they had a greater impact on the marketplace with almost twice the average number of staff and annual revenue. Virtually all non-traditional dental practices were located in zip codes that also had a traditional dental practice. Zip codes with non-traditional practices had significant differences from zip codes with only a traditional dental practice: the populations in areas with non-traditional practices had higher income levels and higher education and were slightly younger and proportionally more Hispanic; those practices also had a much higher likelihood of being located in a major metropolitan area. Dental educators and leaders need to understand the impact of these trends in the practice environment in order to both prepare graduates for practice and make decisions about planning for the workforce of the future.
Noninvasive acceleration measurements to characterize knee arthritis and chondromalacia.
Reddy, N P; Rothschild, B M; Mandal, M; Gupta, V; Suryanarayanan, S
1995-01-01
Devising techniques and instrumentation for early detection of knee arthritis and chondromalacia presents a challenge in the domain of biomedical engineering. The purpose of the present investigation was to characterize normal knees and knees affected by osteoarthritis, rheumatoid arthritis, and chondromalacia using a set of noninvasive acceleration measurements. Ultraminiature accelerometers were placed on the skin over the patella in four groups of subjects, and acceleration measurements were obtained during leg rotation. Acceleration measurements were significantly different in the four groups of subjects in the time and frequency domains. Power spectral analysis revealed that the average power was significantly different for these groups over the 100-500 Hz range. Noninvasive acceleration measurements can thus characterize normal, arthritic, and chondromalacia-affected knees. However, a study on a larger group of subjects is indicated.
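The band-limited average power comparison described above (spectral power over 100-500 Hz) can be sketched with a plain periodogram; the synthetic signal, sampling rate, and band edges below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average one-sided PSD of `signal` in the band [f_lo, f_hi] Hz."""
    n = len(signal)
    window = np.hanning(n)
    spec = np.fft.rfft(signal * window)
    psd = (np.abs(spec) ** 2) / (fs * np.sum(window ** 2))
    psd[1:-1] *= 2.0                      # fold negative frequencies in
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.mean(psd[band]))

# synthetic "vibration": a 300 Hz component buried in noise, sampled at 2 kHz
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 300.0 * t) + 0.1 * rng.standard_normal(t.size)
```

For this signal, `band_power(sig, fs, 100, 500)` is far larger than `band_power(sig, fs, 600, 1000)`, which is the kind of group contrast the study reports over 100-500 Hz.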
NASA Astrophysics Data System (ADS)
Zulick, Calvin Andrew
The development of short pulse high power lasers has led to interest in laser based particle accelerators. Laser produced plasmas have been shown to support quasi-static TeV/m acceleration gradients which are more than four orders of magnitude stronger than conventional accelerators. These high gradients have the potential to allow compact particle accelerators for active interrogation of nuclear material. In order to better understand this application, several experiments have been conducted at the HERCULES and Lambda Cubed lasers at the Center for Ultrafast Optical Science at the University of Michigan. Electron acceleration and bremsstrahlung generation were studied on the Lambda Cubed laser. The scaling of the intensity, angular, and material dependence of bremsstrahlung radiation from an intense (I > 10¹⁸ W/cm²) laser-solid interaction has been characterized at energies between 100 keV and 1 MeV. These were the first high resolution (λ/dλ > 100) measurements of bremsstrahlung photons from a relativistic laser plasma interaction. The electron populations and bremsstrahlung temperatures were modeled in the particle-in-cell code OSIRIS and the Monte Carlo code MCNPX and were in good agreement with the experimental results. Proton acceleration was studied on the HERCULES laser. The effect of three dimensional perturbations of electron sheaths on proton acceleration was investigated through the use of foil, grid, and wire targets. Hot electron density, as measured with an imaging Cu Kα crystal, increased as the target surface area was reduced and was correlated to an increase in the temperature of the accelerated proton beam. Additionally, experiments at the HERCULES laser facility have produced directional neutron beams with energies up to 16.8 (±0.3) MeV using (d,n) and (p,n) reactions.
Efficient (d,n) reactions required the selective acceleration of deuterons through the introduction of a deuterated plastic or cryogenically frozen D₂O layer on the surface of a thin film target. The measured neutron yield was up to 1.0 (±0.5) × 10⁷ neutrons/sr, with a flux 6.2 (±3.7) times higher in the forward direction than at 90°. This demonstrated that femtosecond lasers are capable of providing a time-averaged neutron flux equivalent to commercial DD generators, with the advantage of a directional beam with picosecond bunch duration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, A; Barnard, J J; Briggs, R J
The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL), a collaboration of LBNL, LLNL, and PPPL, has achieved 60-fold pulse compression of ion beams on the Neutralized Drift Compression eXperiment (NDCX) at LBNL. In NDCX, a ramped voltage pulse from an induction cell imparts a velocity 'tilt' to the beam; the beam's tail then catches up with its head in a plasma environment that provides neutralization. The HIFS-VNL's mission is to carry out studies of warm dense matter (WDM) physics using ion beams as the energy source; an emerging thrust is basic target physics for heavy ion-driven inertial fusion energy (IFE). These goals require an improved platform, labeled NDCX-II. Development of NDCX-II at modest cost was recently enabled by the availability of induction cells and associated hardware from the decommissioned Advanced Test Accelerator (ATA) facility at LLNL. Our initial physics design concept accelerates an ≈30 nC pulse of Li⁺ ions to ≈3 MeV, then compresses it to ≈1 ns while focusing it onto a mm-scale spot. It uses the ATA cells themselves (with waveforms shaped by passive circuits) to impart the final velocity tilt; smart pulsers provide small corrections. The ATA accelerated electrons; acceleration of non-relativistic ions involves more complex beam dynamics both transversely and longitudinally. We are using an interactive one-dimensional kinetic simulation model and multidimensional Warp-code simulations to develop the NDCX-II accelerator section. Both LSP and Warp codes are being applied to the beam dynamics in the neutralized drift and final focus regions, and the plasma injection process. The status of this effort is described.
Quantum mechanics in noninertial reference frames: Relativistic accelerations and fictitious forces
NASA Astrophysics Data System (ADS)
Klink, W. H.; Wickramasekara, S.
2016-06-01
One-particle systems in relativistically accelerating reference frames can be associated with a class of unitary representations of the group of arbitrary coordinate transformations, an extension of the Wigner-Bargmann definition of particles as the physical realization of unitary irreducible representations of the Poincaré group. Representations of the group of arbitrary coordinate transformations become necessary to define unitary operators implementing relativistic acceleration transformations in quantum theory because, unlike in the Galilean case, the relativistic acceleration transformations do not themselves form a group. The momentum operators that follow from these representations show how the fictitious forces in noninertial reference frames are generated in quantum theory.
GPU-accelerated simulations of isolated black holes
NASA Astrophysics Data System (ADS)
Lewis, Adam G. M.; Pfeiffer, Harald P.
2018-05-01
We present a port of the numerical relativity code SpEC which is capable of running on NVIDIA GPUs. Since this code must be maintained in parallel with SpEC itself, a primary design consideration is to perform as few explicit code changes as possible. We therefore rely on a hierarchy of automated porting strategies. At the highest level we use TLoops, a C++ library of our design, to automatically emit CUDA code equivalent to tensorial expressions written into C++ source using a syntax similar to analytic calculation. Next, we trace out and cache explicit matrix representations of the numerous linear transformations in the SpEC code, which allows these to be performed on the GPU using pre-existing matrix-multiplication libraries. We port the few remaining important modules by hand. In this paper we detail the specifics of our port, and present benchmarks of it simulating isolated black hole spacetimes on several generations of NVIDIA GPU.
Efficient Modeling of Laser-Plasma Accelerators with INF and RNO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, C.; Schroeder, C. B.; Esarey, E.
2010-11-04
The numerical modeling code INF and RNO (INtegrated Fluid and paRticle simulatioN cOde, pronounced 'inferno') is presented. INF and RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests together with a discussion of performance is presented.
A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code
NASA Technical Reports Server (NTRS)
Chen, H. C.; Su, T. Y.; Kao, T. J.
1991-01-01
This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.
NORTICA—a new code for cyclotron analysis
NASA Astrophysics Data System (ADS)
Gorelov, D.; Johnson, D.; Marti, F.
2001-12-01
The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is assisting in setting and tuning the cyclotrons taking into account the main field and extraction channel imperfections. The computer platform for the package is Alpha Station with UNIX operating system and X-Windows graphic interface. A multiple programming language approach was used in order to combine the reliability of the numerical algorithms developed over the long period of time in the laboratory and the friendliness of modern style user interface. This paper describes the capability and features of the codes in the present state.
Shielding calculations for industrial 5/7.5 MeV electron accelerators using the MCNP Monte Carlo Code
NASA Astrophysics Data System (ADS)
Peri, Eyal; Orion, Itzhak
2017-09-01
High energy X-rays from accelerators are used to irradiate food ingredients to prevent the growth and development of unwanted biological organisms in food and thereby extend the shelf life of the products. The X-rays are produced by accelerating 5 MeV electrons onto a heavy (high-Z) target. Since 2004, the FDA has approved using 7.5 MeV energy, providing higher production rates with lower treatment costs. In this study we calculated all the essential data needed for a straightforward concrete shielding design of typical food accelerator rooms. The following evaluations were done using the MCNP Monte Carlo code system: (1) angular dependence (0-180°) of photon dose rate for 5 MeV and 7.5 MeV electron beams bombarding iron, aluminum, gold, tantalum, and tungsten targets; (2) angular dependence (0-180°) spectral distribution simulations of bremsstrahlung for gold, tantalum, and tungsten bombarded by 5 MeV and 7.5 MeV electron beams; (3) concrete attenuation calculations at several photon emission angles for the 5 MeV and 7.5 MeV electron beams bombarding a tantalum target. Based on the simulations, we calculated the expected increase in dose rate for facilities intending to increase the energy from 5 MeV to 7.5 MeV, and the concrete width that needs to be added in order to keep the existing dose rate unchanged.
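The final design question above, how much concrete must be added to cancel a dose-rate increase, reduces under broad-beam exponential attenuation, D(x) = D₀ exp(−μx), to a one-line calculation once a tenth-value layer (TVL) is known. The numbers below are illustrative only (the actual dose ratios and attenuation data come from the paper's MCNP runs, not from this sketch):

```python
import math

def added_thickness(dose_ratio, tvl_cm):
    """Extra shield thickness (cm) that cancels a `dose_ratio`-fold
    dose-rate increase, assuming broad-beam exponential attenuation
    with tenth-value layer `tvl_cm` (thickness cutting dose by 10x)."""
    return tvl_cm * math.log10(dose_ratio)

# Illustrative numbers only: suppose moving from a 5 MeV to a 7.5 MeV beam
# triples the unshielded dose rate, and the concrete TVL for the relevant
# photon spectrum is ~44 cm.
extra = added_thickness(3.0, 44.0)   # ~21 cm of additional concrete
```

A full TVL of extra concrete would be needed only if the dose rate rose a full factor of ten; `added_thickness(10.0, 44.0)` returns exactly 44 cm.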
Three-dimensional structural analysis using interactive graphics
NASA Technical Reports Server (NTRS)
Biffle, J.; Sumlin, H. A.
1975-01-01
The application of computer interactive graphics to three-dimensional structural analysis was described, with emphasis on the following aspects: (1) structural analysis, and (2) generation and checking of input data and examination of the large volume of output data (stresses, displacements, velocities, accelerations). Handling of three-dimensional input processing with a special MESH3D computer program was explained. Similarly, a special code PLTZ may be used to perform all the needed tasks for output processing from a finite element code. Examples were illustrated.
Plasma Heating Simulation in the VASIMR System
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; ChangDiaz, Franklin R.; Squire, Jared P.; Carter, Mark D.
2005-01-01
The paper describes the recent development in the simulation of the ion-cyclotron acceleration of the plasma in the VASIMR experiment. The modeling is done using an improved EMIR code for RF field calculation together with a particle trajectory code for plasma transport calculation. The simulation results correlate with experimental data on the plasma loading and predict higher ICRH performance for a higher density plasma target. These simulations assist in optimizing the ICRF antenna so as to achieve higher VASIMR efficiency.
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
Genetics of Congenital Heart Disease: Past and Present.
Muntean, Iolanda; Togănel, Rodica; Benedek, Theodora
2017-04-01
Congenital heart disease is the most common congenital anomaly, representing an important cause of infant morbidity and mortality. Congenital heart disease represents a group of heart anomalies that include septal defects, valve defects, and outflow tract anomalies. The exact genetic, epigenetic, or environmental basis of congenital heart disease remains poorly understood, although the mechanism is likely multifactorial. However, the development of new technologies, including copy number variant analysis, single-nucleotide polymorphism genotyping, and next-generation sequencing, is accelerating the detection of genetic causes of heart anomalies. Recent studies suggest a role of small non-coding RNAs, microRNAs, in congenital heart disease. The recently described epigenetic factors have also been found to contribute to cardiac morphogenesis. In this review, we present past and recent genetic discoveries in congenital heart disease.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
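Among the group (linear block) codes the book covers, the Hamming(7,4) code is the smallest nontrivial single-error-correcting example. A generator-matrix encoder and a syndrome decoder fit in a few lines; the matrices follow the standard systematic construction, while the function names are ours:

```python
import numpy as np

# Hamming(7,4): G = [I4 | P] encodes 4 message bits into 7 code bits;
# H = [P^T | I3] checks them. All arithmetic is mod 2.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    """Encode a length-4 bit vector into a length-7 codeword."""
    return (msg @ G) % 2

def decode(word):
    """Syndrome decoding: correct up to one flipped bit, return the message."""
    syndrome = (H @ word) % 2
    if syndrome.any():
        # a single-bit error at position i yields syndrome = column i of H
        err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        word = word.copy()
        word[err] ^= 1
    return word[:4]
```

Because every codeword satisfies H·c = 0 (mod 2) and all seven columns of H are distinct and nonzero, any single bit flip maps to a unique syndrome and is corrected exactly, which is the "group code plus parity checks" machinery the book develops in general form.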
Incorporation of Dynamic SSI Effects in the Design Response Spectra
NASA Astrophysics Data System (ADS)
Manjula, N. K.; Pillai, T. M. Madhavan; Nagarajan, Praveen; Reshma, K. K.
2018-05-01
Many studies in the past on dynamic soil-structure interaction have revealed both the detrimental and the advantageous effects of soil flexibility. Based on such studies, the design response spectra of international seismic codes are being improved worldwide. The improvements required for the short-period range of the design response spectrum in the Indian seismic code (IS 1893:2002) are presented in this paper. As the recent code revision has not incorporated the short-period amplifications, the proposals given in this paper are equally applicable to the latest code (IS 1893:2016). Analyses of single-degree-of-freedom systems are performed to predict the required improvements. The proposed modifications to the constant-acceleration portion of the spectra are evaluated with respect to the current design spectra in Eurocode 8.
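The "constant-acceleration portion" discussed above is one branch of a piecewise design spectrum. A Eurocode-8-style elastic spectrum shape can be sketched as below; the branch formulas follow the standard EN 1998-1 pattern, but the default corner periods TB, TC, TD and soil factor S are illustrative placeholders, not the tabulated values for any ground type:

```python
def design_spectrum(T, ag, S=1.0, TB=0.15, TC=0.5, TD=2.0, eta=1.0):
    """Elastic design spectral acceleration Sa(T), Eurocode-8-style shape:
    rising branch, constant-acceleration plateau (factor 2.5), then
    constant-velocity (1/T) and constant-displacement (1/T^2) branches."""
    if T <= TB:
        return ag * S * (1.0 + (T / TB) * (eta * 2.5 - 1.0))
    if T <= TC:
        return ag * S * eta * 2.5                      # short-period plateau
    if T <= TD:
        return ag * S * eta * 2.5 * TC / T
    return ag * S * eta * 2.5 * TC * TD / (T * T)
```

For a peak ground acceleration of ag = 0.2 g, the plateau value is 0.5 g between TB and TC, and short-period amplification proposals of the kind the paper makes amount to reshaping exactly this plateau region.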
Compact torus accelerator as a driver for ICF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobin, M.T.; Meier, W.R.; Morse, E.C.
1986-01-01
The authors have carried out further investigations of the technical issues associated with using a compact torus (CT) accelerator as a driver for inertial confinement fusion (ICF). In a CT accelerator, a magnetically confined, torus-shaped plasma is compressed, accelerated, and focused by two concentric electrodes. After its initial formation, the torus shape is maintained for lifetimes exceeding 1 ms by inherent poloidal and toroidal currents. Hartman suggests acceleration and focusing of such a plasma ring will not cause dissolution within certain constraints. In this study, we evaluated a point design based on an available capacitor bank energy of 9.2 MJ. This accelerator, which was modeled by a zero-dimensional code, produces a xenon plasma ring with a 0.73-cm radius, a velocity of 4.14 × 10⁹ cm/s, and a mass of 4.42 μg. The energy of the plasma ring as it leaves the accelerator is 3.8 MJ, or 41% of the capacitor bank energy. Our studies confirm the feasibility of producing a plasma ring with the characteristics required to induce fusion in an ICF target with a gain greater than 50. The low cost and high efficiency of the CT accelerator are particularly attractive. Uncertainties concerning propagation, accelerator lifetime, and power supply must be resolved to establish the viability of the accelerator as an ICF driver.
Particle acceleration at a reconnecting magnetic separator
NASA Astrophysics Data System (ADS)
Threlfall, J.; Neukirch, T.; Parnell, C. E.; Eradat Oskoui, S.
2015-02-01
Context. While the exact acceleration mechanism of energetic particles during solar flares is (as yet) unknown, magnetic reconnection plays a key role both in the release of stored magnetic energy of the solar corona and the magnetic restructuring during a flare. Recent work has shown that special field lines, called separators, are common sites of reconnection in 3D numerical experiments. To date, 3D separator reconnection sites have received little attention as particle accelerators. Aims: We investigate the effectiveness of separator reconnection as a particle acceleration mechanism for electrons and protons. Methods: We study the particle acceleration using a relativistic guiding-centre particle code in a time-dependent kinematic model of magnetic reconnection at a separator. Results: The effect upon particle behaviour of initial position, pitch angle, and initial kinetic energy are examined in detail, both for specific (single) particle examples and for large distributions of initial conditions. The separator reconnection model contains several free parameters, and we study the effect of changing these parameters upon particle acceleration, in particular in view of the final particle energy ranges that agree with observed energy spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1993-07-01
The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.
Particle Acceleration, Magnetic Field Generation in Relativistic Shocks
NASA Technical Reports Server (NTRS)
Nishikawa, Ken-Ichi; Hardee, P.; Hededal, C. B.; Richardson, G.; Sol, H.; Preece, R.; Fishman, G. J.
2005-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient and weak ambient parallel magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. New simulations with an ambient perpendicular magnetic field show the strong interaction between the relativistic jet and the magnetic fields. The magnetic fields are piled up by the jet and the jet electrons are bent, which creates currents and displacement currents. At the nonlinear stage, the magnetic fields are reversed by the current and reconnection may take place. Due to these dynamics the jet and ambient electrons are strongly accelerated in both parallel and perpendicular directions.
Particle Acceleration, Magnetic Field Generation, and Emission in Relativistic Shocks
NASA Technical Reports Server (NTRS)
Nishikawa, Ken-IchiI.; Hededal, C.; Hardee, P.; Richardson, G.; Preece, R.; Sol, H.; Fishman, G.
2004-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient and weak ambient parallel magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. New simulations with an ambient perpendicular magnetic field show the strong interaction between the relativistic jet and the magnetic fields. The magnetic fields are piled up by the jet and the jet electrons are bent, which creates currents and displacement currents. At the nonlinear stage, the magnetic fields are reversed by the current and reconnection may take place. Due to these dynamics the jet and ambient electrons are strongly accelerated in both parallel and perpendicular directions.
Zhang, Xiaoyu; Sun, Ling; Shen, Yang; Tian, Mi; Zhao, Jing; Zhao, Yu; Li, Meiyan; Zhou, Xingtao
2017-07-01
This study aimed to compare the biomechanical and histopathologic effects of transepithelial and accelerated epithelium-off pulsed-light accelerated corneal collagen cross-linking (CXL). A total of 24 New Zealand rabbits were analyzed after sham operation (control) or transepithelial or epithelium-off operation (45 mW/cm² for both). The transepithelial group was treated with pulsed-light ultraviolet A for 5 minutes 20 seconds, and the epithelium-off group was treated for 90 seconds. Biomechanical testing, including ultimate stress, Young modulus, and the physiological modulus, was analyzed. Histological changes were evaluated by light microscopy and transmission electron microscopy. The stress-strain curve was nonlinear in both accelerated transepithelial and epithelium-off CXL groups. The stress and elastic moduli were all significantly higher in both experimental groups compared with the control group (P < 0.05), whereas there were no significant differences between the 2 treatment groups (P > 0.05). Six months after the operation, hematoxylin and eosin staining and transmission electron microscopy showed that the subcutaneous collagen fibers were arranged in a regular pattern, and the fiber density was higher in the experimental groups. Both transepithelial and accelerated epithelium-off CXL produced biomechanical and histopathologic improvements, which were not significantly different between the 2 pulsed-light accelerated CXL treatments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, R. J.; Kozlovsky, B.; Share, G. H.
2016-12-20
The ³He abundance in impulsive solar energetic particle (SEP) events is enhanced up to several orders of magnitude compared to its photospheric value of [³He]/[⁴He] = 1–3 × 10⁻⁴. Interplanetary magnetic field and timing observations suggest that these events are related to solar flares. Observations of ³He in flare-accelerated ions would clarify the relationship between these two phenomena. Energetic ³He interactions in the solar atmosphere produce gamma-ray nuclear-deexcitation lines, both lines that are also produced by protons and α particles and lines that are essentially unique to ³He. Gamma-ray spectroscopy can, therefore, reveal enhanced levels of accelerated ³He. In this paper, we identify all significant deexcitation lines produced by ³He interactions in the solar atmosphere. We evaluate their production cross sections and incorporate them into our nuclear deexcitation-line code. We find that enhanced ³He can affect the entire gamma-ray spectrum. We identify gamma-ray line features for which the yield ratios depend dramatically on the ³He abundance. We determine the accelerated ³He/α ratio by comparing these ratios with flux ratios measured previously from the gamma-ray spectrum obtained by summing the 19 strongest flares observed with the Solar Maximum Mission Gamma-Ray Spectrometer. All six flux ratios investigated show enhanced ³He, confirming earlier suggestions. The ³He/α weighted mean of these new measurements ranges from 0.05 to 0.3 (depending on the assumed accelerated α/proton ratio) and has a < 1 × 10⁻³ probability of being consistent with the photospheric value. With the improved code, we can now exploit the full potential of gamma-ray spectroscopy to establish the relationship between flare-accelerated ions and ³He-rich SEPs.
MAPA: Implementation of the Standard Interchange Format and use for analyzing lattices
NASA Astrophysics Data System (ADS)
Shasharina, Svetlana G.; Cary, John R.
1997-05-01
MAPA (Modular Accelerator Physics Analysis) is an object-oriented application for accelerator design and analysis with a Motif-based graphical user interface. MAPA has been ported to AIX, Linux, HPUX, Solaris, and IRIX. MAPA provides an intuitive environment for accelerator study and design. The user can bring up windows for fully nonlinear analysis of accelerator lattices in any number of dimensions. The current graphical analysis methods of Lifetime plots and Surfaces of Section have been used to analyze the improved lattice designs of Wan, Cary, and Shasharina (this conference). MAPA can now read and write Standard Interchange Format (MAD) accelerator description files, and it has a general graphical user interface for adding, changing, and deleting elements. MAPA's consistency checks prevent deletion of used elements and prevent creation of recursive beam lines. Plans include development of a richer set of modeling tools and the ability to invoke existing modeling codes through the MAPA interface. MAPA will be demonstrated on a Pentium 150 laptop running Linux.
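MAPA's actual SIF reader is not public in this record; purely as an illustration of what parsing a MAD-style element definition involves, here is a minimal sketch (the function name and the simplified grammar are my own — the real SIF grammar also covers beam lines, macros, and expressions):

```python
def parse_mad_element(line):
    """Parse one MAD/SIF-style element definition such as
    'QF: QUADRUPOLE, L=0.5, K1=0.8;' into (name, type, params).
    Illustrative sketch only; not MAPA's implementation."""
    line = line.strip().rstrip(";")
    head, _, rest = line.partition(":")
    parts = [p.strip() for p in rest.split(",")]
    etype, params = parts[0].upper(), {}
    for item in parts[1:]:
        key, _, val = item.partition("=")
        params[key.strip().upper()] = float(val)  # numeric attributes only
    return head.strip().upper(), etype, params

name, etype, params = parse_mad_element("QF: QUADRUPOLE, L=0.5, K1=0.8;")
```

A real reader built this way would also be the natural place for the consistency checks the abstract mentions, e.g. refusing to delete an element still referenced by a beam line.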
Optical manipulation for optogenetics: otoliths manipulation in zebrafish (Conference Presentation)
NASA Astrophysics Data System (ADS)
Favre-Bulle, Itia A.; Scott, Ethan; Rubinsztein-Dunlop, Halina
2016-03-01
Otoliths play an important role in zebrafish hearing and sense of balance. Many studies have been conducted to understand their structure and function; however, how their movement is encoded in the brain remains unknown. Here we developed a noninvasive system capable of manipulating the otoliths using optical trapping while we image the behavioral response and brain activity. We will also present our tools for behavioral response detection and brain activity mapping. Acceleration is sensed through movements of the otoliths in the inner ear. Because experimental manipulations involve movements, electrophysiology and fluorescence microscopy are difficult. As a result, the neural codes underlying acceleration sensation are poorly understood. We have developed a technique for optically trapping otoliths, allowing us to simulate acceleration in stationary larval zebrafish. By applying forces to the otoliths, we can elicit behavioral responses consistent with compensation for perceived acceleration. Since the animal is stationary, we can use calcium imaging in these animals' brains to identify the functional circuits responsible for mediating responses to acceleration in natural settings.
Particle acceleration, magnetic field generation, and emission in relativistic pair jets
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Ramirez-Ruiz, E.; Hardee, P.; Hededal, C.; Kouveliotou, C.; Fishman, G. J.; Mizuno, Y.
2005-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Recent simulations show that the Weibel instability created by relativistic pair jets is responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet propagating through an ambient plasma with and without initial magnetic fields. The growth rate of the Weibel instability depends on the distribution of the pair jets. The Weibel instability created in the collisionless shock accelerates particles perpendicular and parallel to the jet propagation direction. This instability is also responsible for generating and amplifying highly nonuniform, small-scale magnetic fields, which contribute to the electron's transverse deflection behind the jet head. The jitter radiation from deflected electrons has different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.
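The particle-advance step at the heart of a relativistic electromagnetic particle code such as the one described is typically a relativistic Boris push. The following is a generic sketch of that mover (not the authors' code), working with the normalized momentum u = γv:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def boris_push(u, E, B, q_over_m, dt):
    """Advance the normalized momentum u = gamma*v of one particle
    by dt in fields E, B using the standard relativistic Boris scheme:
    half electric kick, magnetic rotation, half electric kick.
    Generic sketch; not the REMP code's actual implementation."""
    u_minus = u + q_over_m * E * (0.5 * dt)            # first half kick
    gamma = np.sqrt(1.0 + u_minus @ u_minus / C**2)    # Lorentz factor
    t = q_over_m * B * (0.5 * dt / gamma)              # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t), s)
    return u_plus + q_over_m * E * (0.5 * dt)          # second half kick
```

With E = 0 the push is a pure rotation, so |u| is conserved exactly; this is the standard sanity check for any such mover.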
Unsteady Aerodynamic Force Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2016-01-01
A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using a two-step approach. Velocities and accelerations of the structure are then computed using an autoregressive moving average model, an on-line parameter estimator, a low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm. A cantilevered rectangular wing built and tested at the NASA Langley Research Center (Hampton, Virginia, USA) in 1959 is used to validate the simple approach. Unsteady aerodynamic forces as well as wing deflections, velocities, accelerations, and strains are computed using the CFL3D computational fluid dynamics (CFD) code and an MSC/NASTRAN code (MSC Software Corporation, Newport Beach, California, USA), and these CFL3D-based results are treated as the measured quantities. Based on the measured strains, wing deflections, velocities, accelerations, and aerodynamic forces are computed using the proposed approach. These computed deflections, velocities, accelerations, and unsteady aerodynamic forces are compared with the CFL3D/NASTRAN-based results. In general, computed aerodynamic forces based on the lifting surface theory at subsonic speeds are in good agreement with the target aerodynamic forces generated using the CFL3D code with the Euler equations. Excellent aeroelastic responses are obtained even with unsteady strain data at a signal-to-noise ratio of −9.8 dB. The deflections, velocities, and accelerations at each sensor location are independent of structural and aerodynamic models.
Therefore, the distributed strain data together with the current proposed approaches can be used as distributed deflection, velocity, and acceleration sensors. This research demonstrates the feasibility of obtaining induced drag and lift forces through the use of distributed sensor technology with measured strain data. An active induced drag control system thus can be designed using the two computed aerodynamic forces, induced drag and lift, to improve the fuel efficiency of an aircraft. Interpolation elements between structural finite element grids and the CFD grids and centroids are successfully incorporated with the unsteady aeroelastic computation scheme. The most critical technology for the success of the proposed approach is the robust on-line parameter estimator, since the least-squares curve fitting method depends heavily on aeroelastic system frequencies and damping factors.
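The curve-fit-plus-analytical-derivative step described above can be illustrated in miniature: fit a least-squares polynomial to a sampled deflection history, then differentiate the fit analytically rather than numerically. This sketch covers only that one step; the paper's full chain (ARMA modeling, on-line parameter estimation, low-pass filtering) is not reproduced here, and the function name is my own:

```python
import numpy as np

def vel_acc_from_deflection(t, w, deg=3):
    """Least-squares polynomial fit to a deflection history w(t).
    Velocity and acceleration come from the fit's analytical time
    derivatives, which suppresses the noise amplification of direct
    finite differencing. Illustrative sketch only."""
    p = np.polynomial.Polynomial.fit(t, w, deg)
    return p.deriv(1)(t), p.deriv(2)(t)

# constant-acceleration check: w = 0.5 * a * t**2 should return a
t = np.linspace(0.0, 1.0, 50)
v, a = vel_acc_from_deflection(t, 0.5 * 2.0 * t**2)
```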
Accelerant-related burns and drug abuse: Challenging combination.
Leung, Leslie T F; Papp, Anthony
2018-05-01
Accelerants are flammable substances that may cause an explosion when added to existing fires. The relationship between drug abuse and accelerant-related burns is not well elucidated in the literature. A portion of these burns is related to drug manufacturing, which has been shown to be associated with increased burn complications. The aims were: 1) to evaluate the demographics and clinical outcomes of accelerant-related burns in a Provincial Burn Centre; 2) to compare the clinical outcomes with a control group of non-accelerant-related burns; and 3) to analyze a subgroup of patients with a history of drug abuse and drug manufacturing. Retrospective case-control study. Patient data associated with accelerant-related burns from 2009 to 2014 were obtained from the British Columbia Burn Registry. These patients were compared with a control group of non-accelerant-related burns. Clinical outcomes evaluated included inhalational injury, ICU length of stay, ventilator support, surgeries needed, and burn complications. The chi-square test was used to evaluate categorical data and Student's t-test to evaluate mean quantitative data, with the p value set at 0.05. A logistic regression model was used to evaluate factors affecting burn complications. Accelerant-related burns represented 28.2% of all burn admissions (N=532) from 2009 to 2014. The accelerant group had a higher percentage of patients with a history of drug abuse and was associated with higher TBSA burns, ventilator support, ICU stay, and pneumonia rates compared to the non-accelerant group. Within the accelerant group, there was no difference in clinical outcomes between people with or without a history of drug abuse. Four cases were associated with methamphetamine manufacturing, all of which required ICU stay and ventilator support. Accelerant-related burns cause a significant burden to the burn center. A significant proportion of these patients have a history of drug abuse. Copyright © 2017 Elsevier Ltd and ISBI.
All rights reserved.
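The group comparisons above rest on the Pearson chi-square test for categorical outcomes. For a 2×2 table the statistic has a closed form, sketched here with made-up counts (not data from the study):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] -- e.g. complication yes/no by accelerant yes/no.
    Compare the result against 3.841, the chi-square critical value
    for 1 degree of freedom at p = 0.05."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

stat = chi_square_2x2(10, 20, 30, 40)  # hypothetical counts, not study data
```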
High Energy Density Physics and Exotic Acceleration Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowan, T.; /General Atomics, San Diego; Colby, E.
2005-09-27
The High Energy Density and Exotic Acceleration working group took as its goal to reach beyond the community of plasma accelerator research, with its applications to high energy physics, and to promote exchange with other disciplines that are challenged by related and demanding beam physics issues. The scope of the group was to cover particle acceleration and beam transport that, unlike in other groups at AAC, are not mediated by plasmas or by electromagnetic structures. At this Workshop, we saw an impressive advancement from years past in the area of Vacuum Acceleration, for example with the LEAP experiment at Stanford. And we saw an influx of exciting new beam physics topics involving particle propagation inside solid-density plasmas or at extremely high charge density, particularly in the areas of laser acceleration of ions and extreme beams for fusion energy research, including Heavy-ion Inertial Fusion beam physics. One example of the importance and extreme nature of beam physics in HED research is the requirement in the Fast Ignitor scheme of inertial fusion to heat a compressed DT fusion pellet to keV temperatures by injection of laser-driven electron or ion beams of giga-amp current. Even in modest experiments presently being performed on the laser acceleration of ions from solids, mega-amp currents of MeV electrons must be transported through solid foils, requiring almost complete return-current neutralization and giving rise to a wide variety of beam-plasma instabilities. As keynote talks, our group promoted Ion Acceleration (plenary talk by A. MacKinnon), which historically has grown out of inertial fusion research, and HIF Accelerator Research (invited talk by A. Friedman), which will require impressive advancements in space-charge-limited ion beam physics and in understanding the generation and transport of neutralized ion beams.
A unifying aspect of High Energy Density applications was the physics of particle beams inside solids, which is proving to be a very important field for diverse applications such as muon cooling, fusion energy research, and ultra-bright particle and radiation generation with high intensity lasers. We had several talks on these and other subjects, and many joint sessions with the Computational group, the EM Structures group, and the Beam Generation group. We summarize our group's work in the following categories: vacuum acceleration schemes; ion acceleration; particle transport in solids; and applications to high energy density phenomena.
NASA Astrophysics Data System (ADS)
Al-Chalabi, Rifat M. Khalil
1997-09-01
Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized neutron nodal balance coefficients (i.e., the restriction operator), an efficient underlying smoothing algorithm (the power method of NESTLE), and the finer-mesh reconstruction algorithm (i.e., the prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels; hence, all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor-physics-consistent homogenization technique.
The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)
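NESTLE's restriction and prolongation operators are physics-based (homogenized nodal balance coefficients and pin-power reconstruction), so they cannot be reproduced here. A generic geometric analogue for a 1D Poisson model problem nonetheless illustrates how smoothing, restriction, and prolongation interlock in a V-cycle; everything below (names, operators, the model problem) is illustrative, not from NESTLE:

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    # weighted-Jacobi relaxation for -u'' = f with Dirichlet boundaries
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def restrict(r):
    # full-weighting transfer of a residual to the next coarser grid
    rc = np.zeros(len(r) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n):
    # linear interpolation of a coarse correction back to n+1 fine points
    e = np.zeros(n + 1)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    u = smooth(u, f, h)                       # pre-smoothing
    if len(u) <= 3:
        return u                              # coarsest grid reached
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)  # residual
    ec = v_cycle(np.zeros(len(u) // 2 + 1), restrict(r), 2.0 * h)   # coarse solve
    u += prolong(ec, len(u) - 1)              # coarse-grid correction
    return smooth(u, f, h)                    # post-smoothing
```

Repeated V-cycles reduce the residual by a roughly constant factor per cycle regardless of grid size, which is precisely the "no numerical stalling" property the multilevel scheme above targets.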
Are Registration of Disease Codes for Adult Anaphylaxis Accurate in the Emergency Department?
Choi, Byungho; Lee, Hyeji
2018-01-01
Purpose: There has been active research on anaphylaxis, but many study subjects are limited to patients registered with anaphylaxis codes. However, anaphylaxis codes tend to be underused. The aim of this study was to investigate the accuracy of anaphylaxis code registration and the clinical characteristics of accurate and inaccurate anaphylaxis registration in anaphylactic patients. Methods: This retrospective study evaluated the medical records of adult patients who visited the university hospital emergency department between 2012 and 2016. The study subjects were divided into accurate and inaccurate coding groups, registered under anaphylaxis codes and under other allergy-related or symptom-related codes, respectively. Results: Among 211,486 patients, 618 (0.29%) had anaphylaxis. Of these, 161 and 457 were assigned to the accurate and inaccurate coding groups, respectively. The average age, transportation to the emergency department, past anaphylaxis history, cancer history, and the cause of anaphylaxis differed between the 2 groups. Cutaneous symptoms manifested more frequently in the inaccurate coding group, while cardiovascular and neurologic symptoms were more frequently observed in the accurate group. Severe symptoms and non-alert consciousness were more common in the accurate group. Oxygen supply, intubation, and epinephrine were more commonly used as treatments for anaphylaxis in the accurate group. Anaphylactic patients with cardiovascular symptoms, severe symptoms, and epinephrine use were more likely to be accurately registered with anaphylaxis disease codes. Conclusions: In cases of anaphylaxis, more patients were registered inaccurately under other allergy-related codes and symptom-related codes than accurately under anaphylaxis disease codes. Cardiovascular symptoms, severe symptoms, and epinephrine treatment were factors associated with accurate registration with anaphylaxis disease codes in patients with anaphylaxis. PMID:29411554
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mokhov, Nikolai
MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark-gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption, and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented in the code and thoroughly benchmarked against experimental data. The code's capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histogramming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles, including neutrinos, is built into the code. The code includes new modules for calculation of displacement-per-atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with the possibility of producing composite shapes and assemblies and their 3D visualization, along with import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format).
The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package, which allows a very efficient and highly accurate description, modeling and visualization of beam-loss-induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.
NASA Technical Reports Server (NTRS)
Nishikawa, K.; Hardee, P. E.; Richardson, G. A.; Preece, R. D.; Sol, H.; Fishman, G. J.
2003-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, the two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient and weak ambient magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. While some Fermi acceleration may occur at the jet front, the majority of electron acceleration takes place behind the jet front and cannot be characterized as Fermi acceleration. The simulation results show that this instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields, which contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.
Numerical simulations of the superdetonative ram accelerator combusting flow field
NASA Technical Reports Server (NTRS)
Soetrisno, Moeljo; Imlay, Scott T.; Roberts, Donald W.
1993-01-01
The effects of projectile canting and fins on the ram accelerator combusting flowfield, and the possible cause of ram accelerator unstart, are investigated by performing axisymmetric, two-dimensional, and three-dimensional calculations. Calculations are performed using the INCA code for solving the Navier-Stokes equations and the quasi-global combustion model of Westbrook and Dryer (1981, 1984), which includes N2 and nine reacting species (CH4, CO, CO2, H2, H, O2, O, OH, and H2O) undergoing a 12-step reaction mechanism. It is found that, without canting, interactions between the fins, boundary layers, and combustion fronts are insufficient to unstart the projectile at superdetonative velocities. With canting, the projectile will unstart at flow conditions where it appears to accelerate without canting. Unstart occurs at some critical canting angle. It is also found that three-dimensionality plays an important role in the overall combustion process.
State of the art in electromagnetic modeling for the Compact Linear Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, Arno; Kabel, Andreas; Lee, Lie-Quan
SLAC's Advanced Computations Department (ACD) has developed the parallel 3D electromagnetic time-domain code T3P for simulations of wakefields and transients in complex accelerator structures. T3P is based on state-of-the-art Finite Element methods on unstructured grids and features unconditional stability, quadratic surface approximation and up to 6th-order vector basis functions for unprecedented simulation accuracy. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with fast turn-around times, aiding the design of the next generation of accelerator facilities. Applications include simulations of the proposed two-beam accelerator structures for the Compact Linear Collider (CLIC): wakefield damping in the Power Extraction and Transfer Structure (PETS) and power transfer to the main beam accelerating structures are investigated.
Quantum mechanics in noninertial reference frames: Relativistic accelerations and fictitious forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klink, W.H., E-mail: william-klink@uiowa.edu; Wickramasekara, S., E-mail: wickrama@grinnell.edu
2016-06-15
One-particle systems in relativistically accelerating reference frames can be associated with a class of unitary representations of the group of arbitrary coordinate transformations, an extension of the Wigner–Bargmann definition of particles as the physical realization of unitary irreducible representations of the Poincaré group. Representations of the group of arbitrary coordinate transformations become necessary to define unitary operators implementing relativistic acceleration transformations in quantum theory because, unlike in the Galilean case, the relativistic acceleration transformations do not themselves form a group. The momentum operators that follow from these representations show how the fictitious forces in noninertial reference frames are generated in quantum theory.
Inductive and electrostatic acceleration in relativistic jet-plasma interactions.
Ng, Johnny S T; Noble, Robert J
2006-03-24
We report on the observation of rapid particle acceleration in numerical simulations of relativistic jet-plasma interactions and discuss the underlying mechanisms. The dynamics of a charge-neutral, narrow, electron-positron jet propagating through an unmagnetized electron-ion plasma was investigated using a three-dimensional, electromagnetic, particle-in-cell computer code. The interaction excited magnetic filamentation as well as electrostatic plasma instabilities. In some cases, the longitudinal electric fields generated inductively and electrostatically reached the cold plasma-wave-breaking limit, and the longitudinal momentum of about half the positrons increased by 50% with a maximum gain exceeding a factor of 2 during the simulation period. Particle acceleration via these mechanisms occurred when the criteria for Weibel instability were satisfied.
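The "cold plasma-wave-breaking limit" invoked above is the standard nonrelativistic estimate for the maximum longitudinal field a cold plasma wave can sustain; it is a textbook expression (not taken from this paper), often quoted in practical units:

```latex
E_{\mathrm{wb}} = \frac{m_e\, c\, \omega_p}{e},
\qquad
\omega_p = \sqrt{\frac{n_e e^{2}}{\varepsilon_0 m_e}},
\qquad
E_{\mathrm{wb}}\,[\mathrm{V/m}] \;\simeq\; 96\,\sqrt{n_e\,[\mathrm{cm^{-3}}]}
```

Fields of this order set the scale for the inductive and electrostatic acceleration the simulations report.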
Analysis of the beam halo in negative ion sources by using 3D3V PIC code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp; Nishioka, S.; Goto, I.
The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle in cell) simulation. The following physical mechanism of the beam halo formation is verified: The beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference of negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result.
Spin dynamics modeling in the AGS based on a stepwise ray-tracing method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutheil, Yann
The AGS provides a polarized proton beam to RHIC. The beam is accelerated in the AGS from Gγ = 4.5 to Gγ = 45.5, and the polarization transmission is critical to the RHIC spin program. In recent years, various systems were implemented to improve the AGS polarization transmission. These upgrades include the double partial snake configuration and the tune jump system. However, 100% polarization transmission through the AGS acceleration cycle has not yet been reached. The current efficiency of the polarization transmission is estimated to be around 85% in typical running conditions. Understanding the sources of depolarization in the AGS is critical to improving the AGS polarized proton performance. The complexity of beam and spin dynamics, which is in part due to the specialized Siberian snake magnets, drove a strong interest in original methods of simulation. For that, the Zgoubi code, capable of direct particle and spin tracking through field maps, was used here to model the AGS. A model of the AGS using the Zgoubi code was developed and interfaced with the current control system through a simple command: the AgsFromSnapRampCmd. Interfacing with the machine control system allows fast modeling using actual machine parameters. These developments allowed the model to realistically reproduce the optics of the AGS along the acceleration ramp. Additional developments in the Zgoubi code, as well as in post-processing and pre-processing tools, granted long-term multiturn beam tracking capabilities: the tracking of realistic beams along the complete AGS acceleration cycle. Beam multiturn tracking simulations in the AGS, using realistic beam and machine parameters, provided a unique insight into the mechanisms behind the evolution of the beam emittance and polarization during the acceleration cycle. Post-processing software was developed to allow the representation of the relevant quantities from the Zgoubi simulation data.
The Zgoubi simulations proved particularly useful for better understanding the polarization losses through horizontal intrinsic spin resonances. The Zgoubi model, as well as the tools developed, was also used for some direct applications. For instance, beam experiment simulations allowed an accurate estimation of the expected polarization gains from machine changes. In particular, the simulations involving the tune jump system provided an accurate estimation of polarization gains and the optimum settings that would improve the performance of the AGS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.D.; Lau, E.L.; Turyshev, S.G.
Radio metric data from the Pioneer 10/11, Galileo, and Ulysses spacecraft indicate an apparent anomalous, constant acceleration acting on the spacecraft with a magnitude of approximately 8.5 × 10⁻⁸ cm/s², directed towards the Sun. Two independent codes and physical strategies have been used to analyze the data. A number of potential causes have been ruled out. We discuss future kinematic tests and possible origins of the signal. © 1998 The American Physical Society
Vrx: a verify-record system for radiotherapy.
Dickof, P; Morris, P; Getz, D
1984-01-01
A system of computer programs has been created to allow the entry of radiotherapy treatment details as defined by the physician, the verification of the machine parameters at every treatment, and the recording of the entire course of treatment. Various utility programs are available to simplify the use and maintenance of the system. The majority of the code is written in FORTRAN-77, the remainder in MACRO-11. The system has been implemented on a PDP 11/60 minicomputer for use with a Mevatron linear accelerator; the implementation required minor hardware changes to the accelerator.
Modeling laser-driven electron acceleration using WARP with Fourier decomposition
Lee, P.; Audet, T. L.; Lehe, R.; ...
2015-12-31
WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.
Modeling laser-driven electron acceleration using WARP with Fourier decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, P.; Audet, T. L.; Lehe, R.
WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.
3D graphics hardware accelerator programming methods for real-time visualization systems
NASA Astrophysics Data System (ADS)
Souetov, Andrew E.
2001-02-01
The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of this type of software compels programmers to use various CASE systems in the design and development process. The subject under discussion is the integration of such systems into the development process, their effective use, and the combination of these new methods with the need to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.
49 CFR 173.52 - Classification codes and compatibility groups of explosives.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 2 2014-10-01 2014-10-01 false Classification codes and compatibility groups of... Class 1 § 173.52 Classification codes and compatibility groups of explosives. (a) The classification..., consists of the division number followed by the compatibility group letter. Compatibility group letters are...
Simulations of High Current NuMI Magnetic Horn Striplines at FNAL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sipahi, Taylan; Biedron, Sandra; Hylen, James
2016-06-01
Both the NuMI (Neutrinos and the Main Injector) beam line, that has been providing intense neutrino beams for several Fermilab experiments (MINOS, MINERVA, NOVA), and the newly proposed LBNF (Long Baseline Neutrino Facility) beam line which plans to produce the highest power neutrino beam in the world for DUNE (the Deep Underground Neutrino Experiment) need pulsed magnetic horns to focus the mesons which decay to produce the neutrinos. The high-current horn and stripline design has been evolving as NuMI reconfigures for higher beam power and to meet the needs of the LBNF design. The CSU particle accelerator group has aided the neutrino physics experiments at Fermilab by producing EM simulations of magnetic horns and the required high-current striplines. In this paper, we present calculations, using the Poisson and ANSYS Maxwell 3D codes, of the EM interaction of the stripline plates of the NuMI horns at critical stress points. In addition, we give the electrical simulation results using the ANSYS Electric code. These results are being used to support the development of evolving horn stripline designs to handle increased electrical current and higher beam power for NuMI upgrades and for LBNF.
Early Experiences Writing Performance Portable OpenMP 4 Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Hernandez, Oscar R
In this paper, we evaluate the recently available directives in OpenMP 4 to parallelize a computational kernel using both the traditional shared memory approach and the newer accelerator targeting capabilities. In addition, we explore various transformations that attempt to increase application performance portability, and examine the expressiveness and performance implications of using these approaches. For example, we want to understand if the target map directives in OpenMP 4 improve data locality when mapped to a shared memory system, as opposed to the traditional first touch policy approach in traditional OpenMP. To that end, we use recent Cray and Intel compilers to measure the performance variations of a simple application kernel when executed on the OLCF's Titan supercomputer with NVIDIA GPUs and the Beacon system with Intel Xeon Phi accelerators attached. To better understand these trade-offs, we compare our results from traditional OpenMP shared memory implementations to the newer accelerator programming model when it is used to target both the CPU and an attached heterogeneous device. We believe the results and lessons learned as presented in this paper will be useful to the larger user community by providing guidelines that can assist programmers in the development of performance portable code.
The procedure execution manager and its application to Advanced Photon Source operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borland, M.
1997-06-01
The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
NASA Astrophysics Data System (ADS)
Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina
Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speedup of 1.4x from utilization of the K20X Tesla GPU on each Titan node, with the charge density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
Luo, Huilan; Chen, Yongsheng; Wang, Junhua
2010-01-01
Background: Atherosclerosis (AS) is caused mainly due to the increase in the serum lipid, thrombosis, and injuries of the endothelial cells. During aviation, the incremental load of positive acceleration that leads to dramatic stress reactions and hemodynamic changes may predispose pilots to functional disorders and even pathological changes of organs. However, much less is known on the correlation between aviation and AS pathogenesis. Methods and Results: A total of 32 rabbits were randomly divided into 4 groups with 8 rabbits in each group. The control group was given a high cholesterol diet but no acceleration exposure, whereas the other 3 experimental groups were treated with a high cholesterol diet and acceleration exposure for 4, 8, and 12 weeks, respectively. In each group, samples of celiac vein blood and the aorta were collected after the last exposure for the measurement of endogenous CO and HO-1 activities, as well as the levels of total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C). As compared with the control group, the endocardial CO content and the HO-1 activity in aortic endothelial cells were significantly elevated at the 4th, 8th, and 12th weekend, respectively (P < 0.05 or <0.01). And these measures tended upward as the exposure time was prolonged. Levels of TC and LDL-C in the experimental groups were significantly higher than those in the control group, presenting an upward tendency. Levels of TG were found significantly increased in the 8-week-exposure group, but significantly declined in the 12-week-exposure group (still higher than those in the control group). Levels of the HDL-C were increased in the 4-week-exposure group, declined in the 8-week-exposure group, and once more increased in the 12-week-exposure group, without significant differences with the control group. 
Conclusions: Positive acceleration exposure may lead to a significant increase in endogenous CO content and HO-1 activity and a metabolic disorder of serum lipids in high-cholesterol diet–fed rabbits, suggesting that acceleration exposure might accelerate the progression of AS. PMID:20877690
Multi-phase SPH modelling of violent hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.
2015-11-01
This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFLOPS on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, H.; Chen, X.; Wu, Q.; Wang, Z.
2016-12-01
The Global Nested Air Quality Prediction Modeling System for Hg (GNAQPMS-Hg) is a global chemical transport model with a coupled mercury transport module for investigating mercury pollution. In this study, we present our work porting the GNAQPMS model to the Intel Xeon Phi Knights Landing (KNL) processor to accelerate the model. KNL is the second-generation product of the Many Integrated Core (MIC) architecture. Compared with the first-generation Knights Corner (KNC), KNL adds new hardware features and can be used as a standalone processor as well as a coprocessor alongside another CPU. Using the Vtune tool, the high-overhead modules in the GNAQPMS model were identified, including the CBMZ gas chemistry, the advection and convection module, and the wet deposition module. These modules were accelerated by optimizing the code and using new features of KNL. The following optimization measures were taken: 1) changing the pure MPI parallel mode to a hybrid MPI/OpenMP parallel mode; 2) vectorizing the code to use the 512-bit wide vector computation units; 3) reducing unnecessary memory access and calculation; 4) reducing Thread Local Storage (TLS) for common variables within each OpenMP thread in CBMZ; 5) changing global communication from file writing and reading to MPI functions. After optimization, the performance of GNAQPMS increased greatly on both the CPU and KNL platforms: single-node tests showed that the optimized version achieved a 2.6x speedup on a two-socket CPU platform and a 3.3x speedup on a one-socket KNL platform compared with the baseline code, i.e., KNL delivered a 1.29x speedup relative to the two-socket CPU platform.
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery
1998-01-01
In this effort, experimental exposure times for monoenergetic electrons and protons were determined to simulate the effects of the space radiation environment on Teflon components of the Hubble Space Telescope. Although the energy range of the available laboratory particle accelerators was limited, optimal exposure times for 50 keV, 220 keV, 350 keV, and 500 keV electrons were calculated that produced a dose-versus-depth profile that approximated the full-spectrum profile and were realizable with existing equipment. For the case of proton exposure, the limited energy range of the laboratory accelerator restricted simulation of the dose to a depth of 0.5 mil. Also, while optimal exposure times were found for 200 keV, 500 keV, and 700 keV protons that simulated the full-spectrum dose-versus-depth profile to this depth, they were of such short duration that the existing laboratory equipment could not be controlled to within the required accuracy. In addition to the obvious experimental issues, other areas exist in which the analytical work could be advanced. Improved computer codes for dose prediction, along with improved methodology for data input and output, would accelerate the calculational aspects and make them more accurate. This is particularly true in the case of proton fluxes, where a paucity of available predictive software appears to exist. The dated nature of many of the existing Monte Carlo particle/radiation transport codes raises the issue of whether existing codes are sufficient for this type of analysis. Other areas that would result in greater fidelity of laboratory exposure effects to the space environment are the use of a larger number of monoenergetic particle fluxes and improved optimization algorithms to determine the weighting values.
Three-dimensional particle simulation of heavy-ion fusion beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, A.; Grote, D.P.; Haber, I.
1992-07-01
The beams in a heavy-ion-beam-driven inertial fusion (HIF) accelerator are collisionless, nonneutral plasmas, confined by applied magnetic and electric fields. These space-charge-dominated beams must be focused onto small (few mm) spots at the fusion target, and so preservation of a small emittance is crucial. The nonlinear beam self-fields can lead to emittance growth, and so a self-consistent field description is needed. To this end, a multidimensional particle simulation code, WARP (Friedman et al., Part. Accel. 37-38, 131 (1992)), has been developed and is being used to study the transport of HIF beams. The code's three-dimensional (3-D) package combines features of an accelerator code and a particle-in-cell plasma simulation. Novel techniques allow it to follow beams through many accelerator elements over long distances and around bends. This paper first outlines the algorithms employed in WARP. A number of applications and corresponding results are then presented. These applications include studies of: beam drift-compression in a misaligned lattice of quadrupole focusing magnets; beam equilibria, and the approach to equilibrium; and the MBE-4 experiment (AIP Conference Proceedings 152 (AIP, New York, 1986), p. 145) recently concluded at Lawrence Berkeley Laboratory (LBL). Finally, 3-D simulations of bent-beam dynamics relevant to the planned Induction Linac Systems Experiments (ILSE) (Fessenden, Nucl. Instrum. Methods Phys. Res. A 278, 13 (1989)) at LBL are described. Axially cold beams are observed to exhibit little or no root-mean-square emittance growth at midpulse in transiting a (sharp) bend. Axially hot beams, in contrast, do exhibit some emittance growth.
Megaquakes, prograde surface waves and urban evolution
NASA Astrophysics Data System (ADS)
Lomnitz, C.; Castaños, H.
2013-05-01
Cities grow according to evolutionary principles. They move away from soft-ground conditions and avoid vulnerable types of structures. A megaquake generates prograde surface waves that produce unexpected damage in modern buildings. The examples (Figs. 1 and 2) were taken from the 1985 Mexico City and the 2010 Concepción, Chile megaquakes. About 400 structures built under supervision according to modern building codes were destroyed in the Mexican earthquake. All were sited on soft ground. A Rayleigh wave will cause surface particles to move as ellipses in a vertical plane. Building codes assume that this motion will be retrograde, as on a homogeneous elastic halfspace, but soft soils are intermediate materials between a solid and a liquid. When Poisson's ratio tends to ν→0.5, the particle motion turns prograde, as it would on a homogeneous fluid halfspace. Building codes assume that the tilt of the ground is not in phase with the acceleration, but we show that structures on soft ground tilt into the direction of the horizontal ground acceleration. The combined effect of gravity and acceleration may destabilize a structure when it is in resonance with its eigenfrequency. References: Castaños, H. and C. Lomnitz, 2013. Charles Darwin and the 1835 Chile earthquake. Seismol. Res. Lett., 84, 19-23. Lomnitz, C., 1990. Mexico 1985: the case for gravity waves. Geophys. J. Int., 102, 569-572. Malischewsky, P.G. et al., 2008. The domain of existence of prograde Rayleigh-wave particle motion. Wave Motion, 45, 556-564. Figure 1: 1985 Mexico megaquake, overturned 15-story apartment building in Mexico City. Figure 2: 2010 Chile megaquake, overturned 15-story reinforced-concrete apartment building in Concepción.
Goo, Young-Hwa; Son, Se-Hee; Yechoor, Vijay K; Paul, Antoni
2016-04-18
Foam cells are central to two major pathogenic processes in atherogenesis: cholesterol buildup in arteries and inflammation. The main underlying cause of cholesterol deposition in arteries is hypercholesterolemia. This study aimed to assess, in vivo, whether elevated plasma cholesterol also alters the inflammatory balance of foam cells. Apolipoprotein E-deficient mice were fed regular mouse chow through the study or were switched to a Western-type diet (WD) 2 or 14 weeks before death. Consecutive sections of the aortic sinus were used for lesion quantification or to isolate RNA from foam cells by laser-capture microdissection (LCM) for microarray and quantitative polymerase chain reaction analyses. WD feeding for 2 or 14 weeks significantly increased plasma cholesterol, but the size of atherosclerotic lesions increased only in the 14-week WD group. Expression of more genes was affected in foam cells of mice under prolonged hypercholesterolemia than in mice fed WD for 2 weeks. However, most transcripts coding for inflammatory mediators remained unchanged in both WD groups. Among the main players in inflammatory or immune responses, chemokine (C-X-C motif) ligand 13 was induced in foam cells of mice under WD for 2 weeks. The interferon-inducible GTPases, guanylate-binding proteins (GBP)3 and GBP6, were induced in the 14-week WD group, and other GBP family members were moderately increased. Our results indicate that acceleration of atherosclerosis by hypercholesterolemia is not linked to global changes in the inflammatory balance of foam cells. However, induction of GBPs uncovers a novel family of immune modulators with a potential role in atherogenesis. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Analysis of Movement Acceleration of Down's Syndrome Teenagers Playing Computer Games.
Carrogi-Vianna, Daniela; Lopes, Paulo Batista; Cymrot, Raquel; Hengles Almeida, Jefferson Jesus; Yazaki, Marcos Lomonaco; Blascovi-Assis, Silvana Maria
2017-12-01
This study aimed to evaluate movement acceleration characteristics in adolescents with Down syndrome (DS) and typical development (TD), while playing bowling and golf videogames on the Nintendo® Wii™. The sample comprised 21 adolescents diagnosed with DS and 33 with TD of both sexes, between 10 and 14 years of age. The arm swing accelerations of the dominant upper limb were collected as measures during the bowling and the golf games. The first valid measurement, verified by the software readings, recorded at the start of each of the games, was used in the analysis. In the bowling game, the groups presented significant statistical differences, with the maximum (M) peaks of acceleration for the Male Control Group (MCG) (M = 70.37) and Female Control Group (FCG) (M = 70.51) when compared with the Male Down Syndrome Group (MDSG) (M = 45.33) and Female Down Syndrome Group (FDSG) (M = 37.24). In the golf game the groups also presented significant statistical differences, the only difference being that the maximum peaks of acceleration for both male groups were superior compared with the female groups, MCG (M = 74.80) and FCG (M = 56.80), as well as in MDSG (M = 45.12) and in FDSG (M = 30.52). It was possible to use accelerometry to evaluate the movement acceleration characteristics of teenagers diagnosed with DS during virtual bowling and golf games played on the Nintendo Wii console.
Efficient Modeling of Laser-Plasma Accelerators with INF&RNO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, C.; Schroeder, C. B.; Esarey, E.
2010-06-01
The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests together with a discussion of the performance is presented.
Characteristics of Four SPE Classes According to Onset Timing and Proton Acceleration Patterns
NASA Astrophysics Data System (ADS)
Kim, Roksoon
2015-04-01
In our previous work (Kim et al., 2015), we suggested a new classification scheme that categorizes SPEs into four groups based on their association with flares or CMEs, inferred from onset timings as well as proton acceleration patterns from multienergy observations. In this study, we have tried to determine whether there are any typical characteristics of the associated events and acceleration sites in each group, using 42 SPEs from 1997 to 2012. We find: (i) if the proton acceleration starts from a lower energy, an SPE has a higher chance of being a strong event (> 5000 pfu) even if the associated flare and CME are not especially strong. The only difference between SPEs associated with flares and those associated with CMEs is the location of the acceleration site: for the former, the sites are very low (~1 Rs) and close to the western limb, while the latter have relatively higher (mean = 6.05 Rs) and wider acceleration sites. (ii) When the proton acceleration starts from the higher energies, an SPE tends to be a relatively weak event (< 1000 pfu), even though the associated CMEs are relatively stronger than in the previous group. (iii) SPEs characterized by simultaneous proton acceleration across the whole energy range within 10 minutes tend to show the weakest proton flux (mean = 327 pfu) in spite of strong associated eruptions; their acceleration heights are very close to the locations of type II radio bursts. Based on these results, we suggest that the different characteristics of the four groups are mainly due to different mechanisms governing the acceleration pattern and interval, and to different conditions such as the acceleration location.
Design of an electromagnetic accelerator for turbulent hydrodynamic mix studies
NASA Astrophysics Data System (ADS)
Susoeff, A. R.; Hawke, R. S.; Morrison, J. J.; Dimonte, G.; Remington, B. A.
1993-12-01
An electromagnetic accelerator in the form of a linear electric motor (LEM) has been designed to achieve controlled acceleration profiles of a carriage containing hydrodynamically unstable fluids for the investigation of the development of turbulent mix. The Rayleigh-Taylor instability is investigated by accelerating two fluids of dissimilar density using the LEM to achieve a wide variety of acceleration and deceleration profiles. The acceleration profiles are achieved by independent control of rail and augmentation currents. A variety of acceleration-time profiles are possible, including: (1) constant, (2) impulsive, and (3) shaped. The LEM and support structure are of a robust design in order to withstand high loads with small deflections and to mitigate operational vibration. Vibration of the carriage during acceleration could create artifacts in the data which would interfere with the intended study of the Rayleigh-Taylor instability. The design allows clear access for diagnostic techniques such as laser-induced fluorescence, radiography, shadowgraphs, and particle imaging velocimetry. Electromagnetic modeling codes were used to optimize the rail and augmentation coil positions within the support structure framework. Results of contemporary studies of non-arcing sliding contact for solid armatures are used in the design of the driving armature and the dynamic electromagnetic braking system. A 0.6-MJ electrolytic capacitor bank is used for energy storage to drive the LEM. This report discusses a LEM design that will accelerate masses of up to 3 kg to a maximum of about 3000 g₀, where g₀ is the acceleration due to gravity.
An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...
2017-10-17
Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
Development and acceleration of unstructured mesh-based CFD solver
NASA Astrophysics Data System (ADS)
Emelyanov, V.; Karpenko, A.; Volkov, K.
2017-06-01
The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite-volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low-Mach-number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). The speedup of the solution on GPUs with respect to the solution on central processing units (CPUs) is compared for different meshes and different methods of distributing input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoup, R.W.; Long, F.; Martin, T.H.
Sandia has developed PBFA-Z, a 20-MA driver for z-pinch experiments, by replacing the water lines, insulator stack, and MITLs on PBFA II with hardware of a new design. The PBFA-Z accelerator was designed to deliver 20 MA to a 15-mg z-pinch load in 100 ns. The accelerator was modeled using circuit codes to determine the time-dependent voltage and current waveforms at the input and output of the water lines, the insulator stack, and the MITLs. The design of the vacuum insulator stack was dictated by the drive voltage, the electric field stress and grading requirements, the water line and MITL interface requirements, and the machine operations and maintenance requirements. The insulator stack consists of four separate modules, each of a different design because of different voltage drive and hardware interface requirements. The shape of the components in each module, i.e., grading rings, insulator rings, flux excluders, and anode and cathode conductors, and the design of the water line and MITL interfaces were optimized by using the electrostatic analysis codes ELECTRO and JASON. The time-dependent performance of the insulator stacks was evaluated using IVORY, a 2-D PIC code. This paper will describe the insulator stack design, present the results of the ELECTRO and IVORY analyses, and show the results of the stack measurements.
GALARIO: a GPU accelerated library for analysing radio interferometer observations
NASA Astrophysics Data System (ADS)
Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo
2018-06-01
We present GALARIO, a computational library that exploits the power of modern graphics processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter/submillimeter Array or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use, and easy to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both have the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be directly used in C or C++. The versatility and the speed of GALARIO open new analysis pathways that would otherwise be prohibitively time consuming, e.g. fitting high-resolution observations of a large number of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open source GNU Lesser General Public License v3.
Probabilistic seismic hazard zonation for the Cuban building code update
NASA Astrophysics Data System (ADS)
Garcia, J.; Llanes-Buron, C.
2013-05-01
A probabilistic seismic hazard assessment has been performed in response to a revision and update of the Cuban building code (NC-46-99) for earthquake-resistant building construction. The hazard assessment has been done according to the standard probabilistic approach (Cornell, 1968), importing the procedures adopted by other nations that have dealt with the problem of revising and updating their national building codes. Problems of earthquake catalogue treatment, attenuation of peak and spectral ground acceleration, as well as seismic source definition have been rigorously analyzed, and a logic-tree approach was used to represent the inevitable uncertainties encountered throughout the whole seismic hazard estimation process. The seismic zonation proposed here consists of a map reflecting the behaviour of the spectral acceleration values for short (0.2 s) and long (1.0 s) periods on rock conditions with a 1642-year return period, which is considered the maximum credible earthquake (ASCE 07-05). In addition, three other design levels are proposed (severe earthquake, with an 808-year return period; ordinary earthquake, with a 475-year return period; and minimum earthquake, with a 225-year return period). The seismic zonation proposed here complies with international standards (IBC-ICC) as well as with current worldwide practice in this field.
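The design levels above are stated as return periods; under the usual assumption of independent annual exceedances, a return period T corresponds to an exceedance probability p = 1 - (1 - 1/T)^t over an exposure time of t years (t = 50 is the conventional choice, assumed here):

```python
def exceedance_probability(return_period, exposure_years=50):
    """Probability of at least one exceedance during the exposure
    time, assuming independent annual occurrences."""
    return 1.0 - (1.0 - 1.0 / return_period) ** exposure_years

# the 475-year "ordinary earthquake" level is the familiar
# ~10%-in-50-years design level
p475 = exceedance_probability(475)
```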
An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.
Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry, which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators
NASA Astrophysics Data System (ADS)
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.
2018-01-01
Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
CubiCal: Suite for fast radio interferometric calibration
NASA Astrophysics Data System (ADS)
Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.
2018-05-01
CubiCal implements several accelerated gain solvers which exploit complex optimization for fast radio interferometric gain calibration. The code can be used for both direction-independent and direction-dependent self-calibration. CubiCal is implemented in Python and Cython, and multiprocessing is fully supported.
A portable platform for accelerated PIC codes and its application to GPUs using OpenACC
NASA Astrophysics Data System (ADS)
Hariri, F.; Tran, T. M.; Jocksch, A.; Lanti, E.; Progsch, J.; Messmer, P.; Brunner, S.; Gheller, C.; Villard, L.
2016-10-01
We present a portable platform, called PIC_ENGINE, for accelerating Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as Graphics Processing Units (GPUs). The aim of this development is efficient simulations on future exascale systems by allowing different parallelization strategies depending on the application problem and the specific architecture. To this end, this platform contains the basic steps of the PIC algorithm and has been designed as a test bed for different algorithmic options and data structures. Among the architectures that this engine can explore, particular attention is given here to systems equipped with GPUs. The study demonstrates that our portable PIC implementation based on the OpenACC programming model can achieve performance closely matching theoretical predictions. Using the Cray XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU can outperform the one on an Intel Sandy Bridge 8-core CPU by a factor of 3.4.
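The basic PIC steps such a platform must organize (charge deposition, field solve, gather, push) are dominated by scatter/gather memory traffic; a cloud-in-cell charge deposition in 1D, written here as a plain Python sketch rather than the PIC_ENGINE API, illustrates the kernel whose data layout these optimizations target:

```python
import numpy as np

def deposit_cic(positions, weights, ngrid, dx):
    """Cloud-in-cell deposition on a periodic 1D grid: each
    particle's weight is split linearly between its two nearest
    grid points. On GPUs this scatter is the step whose memory
    access pattern (and use of atomics or coloring) dominates
    performance."""
    rho = np.zeros(ngrid)
    for x, w in zip(positions, weights):
        s = x / dx
        i = int(np.floor(s))
        frac = s - i
        rho[i % ngrid] += w * (1.0 - frac)
        rho[(i + 1) % ngrid] += w * frac
    return rho

# one unit-weight particle a quarter of the way into cell 0
rho = deposit_cic([0.25], [1.0], ngrid=4, dx=1.0)
```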
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms, with support to migrate them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level process chain and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected with a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
3D RNA and functional interactions from evolutionary couplings
Weinreb, Caleb; Riesselman, Adam; Ingraham, John B.; Gross, Torsten; Sander, Chris; Marks, Debora S.
2016-01-01
Non-coding RNAs are ubiquitous, but the discovery of new RNA gene sequences far outpaces research on their structure and functional interactions. We mine the evolutionary sequence record to derive precise information about function and structure of RNAs and RNA-protein complexes. As in protein structure prediction, we use maximum entropy global probability models of sequence co-variation to infer evolutionarily constrained nucleotide-nucleotide interactions within RNA molecules, and nucleotide-amino acid interactions in RNA-protein complexes. The predicted contacts allow all-atom blinded 3D structure prediction with good accuracy for several known RNA structures and RNA-protein complexes. For unknown structures, we predict contacts in 160 non-coding RNA families. Beyond 3D structure prediction, evolutionary couplings help identify important functional interactions, e.g., at switch points in riboswitches and at a complex nucleation site in HIV. Aided by accelerating sequence accumulation, evolutionary coupling analysis can accelerate the discovery of functional interactions and 3D structures involving RNA. PMID:27087444
Automated optical inspection and image analysis of superconducting radio-frequency cavities
NASA Astrophysics Data System (ADS)
Wenskat, M.
2017-05-01
The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. For an investigation of this inner surface of more than 100 cavities within the cavity fabrication for the European XFEL and the ILC HiGrade Research Project, the optical inspection robot OBACHT was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed, and new variables to describe the cavity surface were obtained. The accuracy of this code is up to 97% and the positive predictive value (PPV) 99% within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties, such as the electron beam welding speed and the different surface roughness due to the different chemical treatments. In addition, a correlation of ρ = -0.93 with a significance of 6σ between an obtained surface variable and the maximum accelerating field was found.
Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Yamada, Masako
The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.
Accelerator test of the coded aperture mask technique for gamma-ray astronomy
NASA Technical Reports Server (NTRS)
Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.
1982-01-01
A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
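Coded-aperture decoding recovers the source direction by cross-correlating the detector shadowgram with the mask pattern; a 1D toy version (a generic sketch with an arbitrary open/closed pattern, not the instrument's actual mask or decoder) shows the idea:

```python
import numpy as np

def decode(shadowgram, mask):
    """Reconstruct a 1D 'sky' by circular cross-correlation of the
    recorded shadowgram with the (periodic) mask pattern; a point
    source appears as a correlation peak at its shift."""
    n = len(mask)
    return np.array([np.dot(shadowgram, np.roll(mask, k)) for k in range(n)])

mask = np.array([1, 0, 1, 1, 0, 1, 0])    # illustrative open/closed pattern
shadow = np.roll(mask, 2).astype(float)   # point source shifted by 2 cells
sky = decode(shadow, mask)
```

argmax(sky) recovers the source shift of 2 cells; real instruments choose mask families (e.g. uniformly redundant arrays) so that the correlation sidelobes are flat.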
NASA Technical Reports Server (NTRS)
Gurman, Joseph (Technical Monitor); Habbal, Shadia Rifai
2004-01-01
Investigations of the physical processes responsible for coronal heating and the acceleration of the solar wind were pursued with the use of our recently developed 2D MHD solar wind code and our 1D multifluid code. In particular, we explored (1) the role of proton temperature anisotropy in the expansion of the solar wind, (2) the role of plasma parameters at the coronal base in the formation of high speed solar wind streams at mid-latitudes, and (3) the heating of coronal loops.
Computing NLTE Opacities -- Node Level Parallel Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Daniel
Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of the non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities. Study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.
2014-09-30
portability is difficult to achieve on future supercomputers that use various types of accelerators (GPUs, Xeon Phi, SIMD, etc). All of these...bottlenecks of NUMA. For example, in the CG code the state vector was originally stored as q(1:Nvar, 1:Npoin), where Nvar is the number of...a Global Grid Point (GGP) storage. On the other hand, in the DG code the state vector is typically stored as q(1:Nvar, 1:Npts, 1:Nelem), where
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyamoto, K.; Okuda, S.; Hatayama, A.
2013-01-14
To understand the physical mechanism of beam halo formation in negative ion beams, a two-dimensional particle-in-cell code for simulating the trajectories of negative ions created via surface production has been developed. The simulation code reproduces a beam halo observed in an actual negative ion beam. The negative ions extracted from the periphery of the plasma meniscus (an electrostatic lens in a source plasma) are over-focused in the extractor due to the large curvature of the meniscus.
SU-E-T-103: Development and Implementation of Web Based Quality Control Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Studinski, R; Taylor, R; Angers, C
Purpose: Historically, many radiation medicine programs have maintained their Quality Control (QC) test results in paper records or Microsoft Excel worksheets. Both these approaches represent significant logistical challenges and are not predisposed to data review and approval. It has been our group's aim to develop and implement web based software designed not just to record and store QC data in a centralized database, but to provide scheduling and data review tools to help manage a radiation therapy clinic's equipment quality control program. Methods: The software was written in the Python programming language using the Django web framework. In order to promote collaboration and validation from other centres, the code was made open source and is freely available to the public via an online source code repository. The code was written to provide a common user interface for data entry, formalize the review and approval process, and offer automated data trending and process control analysis of test results. Results: As of February 2014, our installation of QATrack+ has 180 tests defined in its database and has collected ∼22 000 test results, all of which have been reviewed and approved by a physicist via QATrack+'s review tools. These results include records for quality control of Elekta accelerators, CT simulators, our brachytherapy programme, TomoTherapy and CyberKnife units. Currently at least 5 other centres are known to be running QATrack+ clinically, forming the start of an international user community. Conclusion: QATrack+ has proven to be an effective tool for collecting radiation therapy QC data, allowing for rapid review and trending of data for a wide variety of treatment units. As free and open source software, all source code, documentation and a bug tracker are available to the public at https://bitbucket.org/tohccmedphys/qatrackplus/.
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes race conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the developed CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
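The rainbow colouring step, grouping points so that no two neighbours share a colour and each colour group can be swept concurrently, can be illustrated with a simple greedy colouring (a generic sketch, not the authors' implementation):

```python
def rainbow_coloring(neighbors):
    """Greedy graph colouring: give each point the smallest colour
    not used by an already-coloured neighbour. Points with the same
    colour have no mutual data dependency, so an LU-SGS sweep can
    update each colour group in parallel, colour by colour.
    `neighbors` maps a point index to its adjacent point indices."""
    colors = {}
    for p in sorted(neighbors):
        used = {colors[n] for n in neighbors[p] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[p] = c
    return colors

# toy point cloud: a 4-point chain 0-1-2-3 needs only two colours
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
coloring = rainbow_coloring(chain)
```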
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Hartmann, D. H.; Hardee, P.; Hededal, C.; Mizunno, Y.; Fishman, G. J.
2006-01-01
We performed numerical simulations of particle acceleration, magnetic field generation, and emission from shocks in order to understand the observed emission from relativistic jets and supernova remnants. The investigation involves the study of collisionless shocks, where the Weibel instability is responsible for particle acceleration as well as magnetic field generation. A 3-D relativistic particle-in-cell (RPIC) code has been used to investigate the shock processes in electron-positron plasmas. The evolution of the Weibel instability and its associated magnetic field generation and particle acceleration are studied with two different jet velocities (γ = 2, 5 - slow, fast) corresponding to either outflows in supernova remnants or relativistic jets, such as those found in AGNs and microquasars. Slow jets have intrinsically different structures in both the generated magnetic fields and the accelerated particle spectrum. In particular, the jet head has a very weak magnetic field, and the ambient electrons are strongly accelerated and dragged by the jet particles. The simulation results exhibit jitter radiation from inhomogeneous magnetic fields, generated by the Weibel instability, which has different spectral properties than standard synchrotron emission in a homogeneous magnetic field.
Particle acceleration magnetic field generation, and emission in Relativistic pair jets
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Ramirez-Ruiz, E.; Hardee, P.; Hededal, C.; Kouveliotou, C.; Fishman, G. J.
2005-01-01
Plasma waves and their associated instabilities (e.g., the Buneman instability, two-streaming instability, and the Weibel instability) are responsible for particle acceleration in relativistic pair jets. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic pair jet propagating through a pair plasma. Simulations show that the Weibel instability created in the collisionless shock accelerates particles perpendicular and parallel to the jet propagation direction. Simulation results show that this instability generates and amplifies highly nonuniform, small-scale magnetic fields, which contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons can have different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants. The growth rate of the Weibel instability and the resulting particle acceleration depend on the magnetic field strength and orientation, and on the initial particle distribution function. In this presentation we explore some of the dependencies of the Weibel instability and resulting particle acceleration on the magnetic field strength and orientation, and the particle distribution function.
Carinou, Eleutheria; Stamatelatos, Ion Evangelos; Kamenopoulou, Vassiliki; Georgolopoulou, Paraskevi; Sandilos, Panayotis
The development of a computational model for the treatment head of a medical electron accelerator (Elekta/Philips SL-18) by the Monte Carlo code MCNP-4C2 is discussed. The model includes the major components of the accelerator head and a PMMA phantom representing the patient body. Calculations were performed for a 14 MeV electron beam impinging on the accelerator target and a 10 cm x 10 cm beam area at the isocentre. The model was used in order to predict the neutron ambient dose equivalent at the isocentre level and, moreover, the neutron absorbed dose distribution within the phantom. Calculations were validated against experimental measurements performed by gold foil activation detectors. The results of this study indicated that the equivalent dose at tissues or organs adjacent to the treatment field due to photoneutrons could be up to 10% of the total peripheral dose, for the specific accelerator characteristics examined. Therefore, photoneutrons should be taken into account when accurate dose calculations are required for sensitive tissues that are adjacent to the therapeutic X-ray beam. The method described can be extended to other accelerators and collimation configurations as well, upon specification of treatment head component dimensions, composition and nominal accelerating potential.
Post-acceleration of laser driven protons with a compact high field linac
NASA Astrophysics Data System (ADS)
Sinigardi, Stefano; Londrillo, Pasquale; Rossi, Francesco; Turchetti, Giorgio; Bolton, Paul R.
2013-05-01
We present a start-to-end 3D numerical simulation of a hybrid scheme for the acceleration of protons. The scheme is based on a first-stage laser acceleration, followed by a transport line with a solenoid or a multiplet of quadrupoles, and then a post-acceleration section in a compact linac. Our simulations show that from a laser-accelerated proton bunch with energy selection at ~30 MeV, it is possible to obtain a high quality monochromatic beam of 60 MeV with intensity at the threshold of interest for medical use. In present day experiments using solid targets, the TNSA mechanism describes accelerated bunches with an exponential energy spectrum up to a cut-off value typically below ~60 MeV and a wide angular distribution. At the cut-off energy, the number of protons to be collimated and post-accelerated in a hybrid scheme is still too low. We investigate laser-plasma acceleration to improve the quality and number of the injected protons at ~30 MeV in order to assure efficient post-acceleration in the hybrid scheme. The results are obtained with 3D PIC simulations using a code where optical acceleration with over-dense targets, transport and post-acceleration in a linac can all be investigated in an integrated framework. The high intensity experiments at Nara are taken as a reference benchmark for our virtual laboratory. If experimentally confirmed, a hybrid scheme could be the core of a medium-sized infrastructure for medical research, capable of producing protons for therapy and x-rays for diagnosis, which complements the development of all-optical systems.
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slowdowns, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size could be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
Astronaut Risk Levels During Crew Module (CM) Land Landing
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Carney, Kelly S.; Littell, Justin
2007-01-01
The NASA Engineering Safety Center (NESC) is investigating the merits of water and land landings for the crew exploration vehicle (CEV). The merits of these two options are being studied in terms of cost and risk to the astronauts, vehicle, support personnel, and general public. The objective of the present work is to determine the astronaut dynamic response index (DRI), which measures injury risks. Risks are determined for a range of vertical and horizontal landing velocities. A structural model of the crew module (CM) is developed and computational simulations are performed using a transient dynamic simulation analysis code (LS-DYNA) to determine acceleration profiles. Landing acceleration profiles are input in a human factors model that determines astronaut risk levels. Details of the modeling approach, the resulting accelerations, and astronaut risk levels are provided.
Charged particle tracking through electrostatic wire meshes using the finite element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devlin, L. J.; Karamyshev, O.; Welsch, C. P., E-mail: carsten.welsch@cockcroft.ac.uk
Wire meshes are used across many disciplines to accelerate and focus charged particles; however, analytical solutions are inexact, and few codes exist that simulate the exact fields around a mesh of physical size. A tracking code based in Matlab-Simulink, using field maps generated with finite element software, has been developed which tracks electrons or ions through electrostatic wire meshes. The fields around such a geometry can be presented as an analytical expression using several basic assumptions; however, it is apparent that computational calculations are required to obtain realistic values of electric potential and fields, particularly when multiple wire meshes are deployed. The tracking code is flexible in that any quantitatively describable particle distribution can be used for both electrons and ions, and it offers other benefits such as ease of export to other programs for analysis. The code is made freely available, and physical examples are highlighted where this code could be beneficial for different applications.
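A minimal form of such field-map tracking, interpolating a precomputed field grid and pushing the particle with a simple explicit integrator, might look as follows (a Python sketch in assumed units with a charge-to-mass ratio qm; the actual code is Matlab-Simulink based and uses finite-element field maps):

```python
import numpy as np

def interp_field(Ex, Ey, x, y, h):
    """Bilinear interpolation of gridded 2D field components
    (grid spacing h) at the point (x, y)."""
    i, j = int(x // h), int(y // h)
    fx, fy = x / h - i, y / h - j
    def bilin(F):
        return ((1 - fx) * (1 - fy) * F[i, j] + fx * (1 - fy) * F[i + 1, j]
                + (1 - fx) * fy * F[i, j + 1] + fx * fy * F[i + 1, j + 1])
    return bilin(Ex), bilin(Ey)

def track(pos, vel, Ex, Ey, h, qm, dt, steps):
    """Push a charged particle through the field map: update the
    velocity from the locally interpolated field, then the position."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    for _ in range(steps):
        E = np.array(interp_field(Ex, Ey, pos[0], pos[1], h))
        vel += qm * E * dt
        pos += vel * dt
    return pos, vel

# uniform unit field along x: the particle accelerates in x only
Ex, Ey = np.ones((4, 4)), np.zeros((4, 4))
pos, vel = track((0.5, 0.5), (0.0, 0.0), Ex, Ey, h=1.0, qm=1.0, dt=0.01, steps=10)
```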
NESSY: NLTE spectral synthesis code for solar and stellar atmospheres
NASA Astrophysics Data System (ADS)
Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.
2017-07-01
Context. Physics-based models of solar and stellar magnetically driven variability rely on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, as a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling the entire (UV-visible-IR and radio) spectrum of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum, in good agreement with the available observations.
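The speedup from an approximate Λ-operator can be illustrated with a toy scalar model of the classic two-level-atom problem, S = (1 - ε)Λ[S] + εB. This is a hedged illustration with assumed scalar values, not NESSY's actual CMF operator.

```python
# Toy scalar model of accelerated Lambda iteration (ALI): the full
# operator Lambda is approximated by the scalar lam, and the local
# approximate operator by lam_star, which is treated implicitly.
eps, lam, B = 1e-3, 0.99, 1.0          # scattering-dominated line
S_exact = eps * B / (1.0 - (1.0 - eps) * lam)

def lambda_iteration(S, n):
    """Ordinary Lambda iteration: error shrinks only by (1-eps)*lam per step."""
    for _ in range(n):
        S = (1.0 - eps) * lam * S + eps * B
    return S

def ali_iteration(S, n, lam_star):
    """ALI: split Lambda = Lambda* + (Lambda - Lambda*), solve the
    Lambda* part implicitly; error shrinks much faster per step."""
    for _ in range(n):
        J_fs = lam * S                  # 'formal solution' with the old S
        S = ((1.0 - eps) * (J_fs - lam_star * S) + eps * B) \
            / (1.0 - (1.0 - eps) * lam_star)
    return S
```

With lam_star = 0.95, the ALI error contracts by roughly 0.78 per iteration versus 0.989 for plain Λ-iteration, so ALI reaches machine-level convergence orders of magnitude sooner.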
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, Arno; Li, Z.; Ng, C.
The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.
GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac
2017-03-01
The Tersoff potential is one of the empirical many-body potentials that have been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require many more arithmetic operations and introduce data dependencies. In this contribution, we have implemented a GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and the hardware configuration. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, and its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and available for public use.
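The extra cost of three-body terms mentioned above can be seen in the loop structure: a pair potential sums over pairs, while a Tersoff-style bond order for pair (i, j) also loops over every third neighbour k. The sketch below uses toy functional forms and an invented cutoff, not the real Tersoff parametrisation.

```python
# Hedged sketch contrasting a pair term with a Tersoff-style bond-order
# term. The inner loop over k in bond_order is the three-body cost that
# the GPU implementation has to parallelize around.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pair_term(pos, i, j):
    return math.exp(-dist(pos[i], pos[j]))    # toy repulsive pair term

def bond_order(pos, i, j, cutoff=3.0):
    """Toy bond order: pair (i, j) is screened by every other neighbour
    k of atom i, so evaluating it costs an extra O(neighbours) loop."""
    zeta = 0.0
    for k in range(len(pos)):
        if k in (i, j):
            continue
        if dist(pos[i], pos[k]) < cutoff:
            zeta += 1.0                        # toy angular contribution
    return 1.0 / math.sqrt(1.0 + zeta)

def energy(pos, cutoff=3.0):
    """Total energy: half-counted sum of bond-order-weighted pair terms."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j and dist(pos[i], pos[j]) < cutoff:
                e += 0.5 * bond_order(pos, i, j) * pair_term(pos, i, j)
    return e
```

The data dependency is visible here too: the bond order of pair (i, j) depends on the positions of all neighbours of i, which is what makes deterministic GPU accumulation nontrivial.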
Particle Acceleration, Magnetic Field Generation, and Emission in Relativistic Pair Jets
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Ramirez-Ruiz, E.; Hardee, P.; Hededal, C.; Mizuno, Y.
2005-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, two-streaming instability, and the Weibel instability) created by relativistic pair jets are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet propagating through an ambient plasma with and without initial magnetic fields. The growth rates of the Weibel instability depend on the distribution of the pair jets. Simulations show that the Weibel instability created in the collisionless shock accelerates particles perpendicular and parallel to the jet propagation direction. The simulation results show that this instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields, which contribute to the electrons' transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties than synchrotron radiation, which is calculated for a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.
Particle Acceleration, Magnetic Field Generation, and Emission in Relativistic Pair Jets
NASA Technical Reports Server (NTRS)
Nishikawa, K. I.; Hardee, P.; Hededal, C. B.; Richardson, G.; Sol, H.; Preece, R.; Fishman, G. J.
2004-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., Buneman, Weibel and other two-stream instabilities) created in collisionless shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating into an ambient plasma. We find that the growth times depend on the Lorentz factors of the jets: jets with larger Lorentz factors grow more slowly. Simulations show that the Weibel instability created in the collisionless shock front accelerates jet and ambient particles both perpendicular and parallel to the jet propagation direction. The small-scale magnetic field structure generated by the Weibel instability is appropriate to the generation of "jitter" radiation from deflected electrons (positrons), as opposed to synchrotron radiation. The jitter radiation resulting from small-scale magnetic field structures may be important for understanding the complex time structure and spectral evolution observed in gamma-ray bursts or other astrophysical sources containing relativistic jets and relativistic collisionless shocks.
NASA Astrophysics Data System (ADS)
Bana, O.; Mintarto, E.; Kusnanik, N. W.
2018-01-01
The purpose of this research is to analyze the following: (1) the effect of acceleration sprint training on speed and agility; (2) the effect of the combined zig-zag drill on speed and agility; and (3) whether the effects of acceleration sprint training and the combined zig-zag drill on speed and agility differ. This research is quantitative with a quasi-experimental approach. The design of this study is a matching-only design. The study was conducted on 33 male students who take part in extracurricular activities, divided into 3 groups of 11 students each. Group 1 was given acceleration sprint training, group 2 was given combined zig-zag drill training, and group 3 was given conventional exercises, for 8 weeks. Data were collected using a 30-meter sprint to test speed and the agility t-test to test agility. Data were analyzed using the t-test and analysis of variance. The conclusions of the research are: (1) there is a significant effect of acceleration sprint training on speed and agility; (2) there is a significant effect of the combined zig-zag drills on speed and agility; and (3) acceleration sprint training has a greater effect on speed and agility.
The accelerated residency program: the Marshall University family practice 9-year experience.
Petrany, Stephen M; Crespo, Richard
2002-10-01
In 1989, the American Board of Family Practice (ABFP) approved the first of 12 accelerated residency programs in family practice. These experimental programs provide a 1-year experience for select medical students that combines the requirements of the fourth year of medical school with those of the first year of residency, reducing the total training time by 1 year. This paper reports on the achievements and limitations of the Marshall University accelerated residency program over a 9-year period that began in 1992. Several parameters have been monitored since the inception of the accelerated program and provide the basis for comparison of accelerated and traditional residents. These include initial resident characteristics, performance outcomes, and practice choices. A total of 16 students were accepted into the accelerated track from 1992 through 1998. During the same time period, 44 residents entered the traditional residency program. Accelerated residents tended to be older and had more career experience than their traditional counterparts. As a group, the accelerated residents scored an average of 30 points higher on the final in-training exams provided by the ABFP. All residents in both groups remained at Marshall to complete the full residency training experience, and all those who have taken the ABFP certifying exam have passed. Accelerated residents were more likely to practice in West Virginia, consistent with one of the initial goals for the program. In addition, accelerated residents were more likely to be elected chief resident and to choose an academic career than those in the traditional group. Both groups opted for small town or rural practice equally. The Marshall University family practice 9-year experience with the accelerated residency track demonstrates that for carefully selected candidates, the program can provide an overall shortened path to board certification and attract students who excel academically and have high leadership potential. Reports from other accelerated programs are needed to fully assess the outcomes of this experiment in postgraduate medical education.
A test of the IAEA code of practice for absorbed dose determination in photon and electron beams
NASA Astrophysics Data System (ADS)
Leitner, Arnold; Tiefenboeck, Wilhelm; Witzani, Josef; Strachotinsky, Christian
1990-12-01
The IAEA (International Atomic Energy Agency) code of practice TRS 277 gives recommendations for absorbed dose determination in high energy photon and electron beams based on the use of ionization chambers calibrated in terms of exposure or air kerma. The scope of the work was to test the code for cobalt-60 gamma radiation and for several radiation qualities at four different types of electron accelerators, and to compare ionization chamber dosimetry with ferrous sulphate dosimetry. The results show agreement between the two methods within about one percent for all the investigated qualities. In addition, the response of the TLD capsules of the IAEA/WHO TL dosimetry service was determined.
NASA Technical Reports Server (NTRS)
Habbal, Shadia Rifai
2005-01-01
Investigations of the physical processes responsible for coronal heating and the acceleration of the solar wind were pursued with the use of our recently developed 2D MHD solar wind code and our 1D multifluid code. In particular, we explored: (1) the role of proton temperature anisotropy in the expansion of the solar wind; (2) the role of plasma parameters at the coronal base in the formation of high-speed solar wind streams at mid-latitudes; (3) a three-fluid model of the slow solar wind; (4) the heating of coronal loops; and (5) a newly developed hybrid code for the study of ion cyclotron resonance in the solar wind.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Katherine J; Johnson, Seth R; Prokopenko, Andrey V
'ForTrilinos' is related to The Trilinos Project, which contains a large and growing collection of solver capabilities that can utilize next-generation platforms, in particular scalable multicore, manycore, accelerator and heterogeneous systems. Trilinos is primarily written in C++, including its user interfaces. While C++ is advantageous for gaining access to the latest programming environments, it limits Trilinos usage via Fortran. Several ad hoc translation interfaces exist to enable Fortran usage of Trilinos, but none of them is general-purpose or written for reusable and sustainable external use. 'ForTrilinos' provides a seamless pathway for large and complex Fortran-based codes to access Trilinos without C/C++ interface code. This access includes Fortran versions of Kokkos abstractions for code execution and data management.
Modeling and Simulation of Explosively Driven Electromechanical Devices
NASA Astrophysics Data System (ADS)
Demmie, Paul N.
2002-07-01
Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.
Kraus, Wayne A; Wagner, Albert F
1986-04-01
A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating-point accelerator are presented as a function of the number of trajectories run simultaneously. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization yields timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, the vectorized FPS code is up to a factor of 25 faster than the VAX code. Copyright © 1986 John Wiley & Sons, Inc.
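The restructuring that vectorization exploits, advancing a whole batch of independent trajectories at each time step rather than one trajectory at a time, can be sketched as follows. This is an illustrative sketch using an assumed harmonic potential, not the LEPS surface or the authors' code.

```python
# Hedged sketch: batch many independent trajectories per time step so
# each arithmetic operation applies across the whole batch (the data
# layout a vector processor such as the FPS 164 exploits).

def step_batch(xs, vs, dt, k=1.0):
    """One velocity-Verlet step applied to every trajectory in the batch
    (toy harmonic force F = -k x)."""
    vs = [v - 0.5 * dt * k * x for x, v in zip(xs, vs)]
    xs = [x + dt * v for x, v in zip(xs, vs)]
    vs = [v - 0.5 * dt * k * x for x, v in zip(xs, vs)]
    return xs, vs

def run(n_traj=100, n_steps=1000, dt=0.01):
    """Integrate n_traj trajectories with slightly varied initial states."""
    xs = [1.0 + 0.001 * i for i in range(n_traj)]
    vs = [0.0] * n_traj
    for _ in range(n_steps):
        xs, vs = step_batch(xs, vs, dt)
    return xs, vs
```

On actual vector hardware the list comprehensions would be single vector instructions over contiguous arrays, which is where the reported factor-of-25 gains come from.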
Comparison of Stopping Power and Range Databases for Radiation Transport Study
NASA Technical Reports Server (NTRS)
Tai, H.; Bichsel, Hans; Wilson, John W.; Shinn, Judy L.; Cucinotta, Francis A.; Badavi, Francis F.
1997-01-01
The codes used to calculate stopping power and range for the space radiation shielding program at the Langley Research Center are based on the work of Ziegler, but with modifications. As more experience is gained from experiments at heavy ion accelerators, prudence dictates a reevaluation of the current databases. Numerical values of stopping power and range calculated with four different codes currently in use are presented for selected ions and materials in the energy domain suitable for space radiation transport. For most collision systems and intermediate particle energies, the codes generally agree to within 1 percent. However, greater discrepancies are seen for heavy systems, especially at low particle energies.
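The core physics behind such stopping-power codes can be sketched with the relativistic Bethe formula (PDG form, omitting density and shell corrections, which matter at the energy extremes where the abstract reports discrepancies). The aluminium parameters below are standard values; this is a hedged sketch, not one of the four compared codes.

```python
# Hedged sketch: mass stopping power <-dE/dx> from the Bethe formula
# for a heavy charged particle, without density or shell corrections.
import math

K = 0.307075        # MeV mol^-1 cm^2 (4*pi*N_A*r_e^2*m_e*c^2)
ME_C2 = 0.510999    # electron rest energy, MeV

def bethe(z, Z, A, I_mev, beta_gamma):
    """<-dE/dx> in MeV cm^2/g for charge z in a medium (Z, A, mean
    excitation energy I), at momentum/mass ratio beta*gamma."""
    gamma = math.sqrt(1.0 + beta_gamma ** 2)
    beta2 = (beta_gamma / gamma) ** 2
    w_max = 2.0 * ME_C2 * beta_gamma ** 2      # heavy-projectile limit
    log_term = 0.5 * math.log(
        2.0 * ME_C2 * beta_gamma ** 2 * w_max / I_mev ** 2)
    return K * z ** 2 * (Z / A) * (1.0 / beta2) * (log_term - beta2)

# Proton (z = 1) near minimum ionization (beta*gamma = 3) in aluminium
# (Z = 13, A = 26.98, I = 166 eV): roughly 1.6 MeV cm^2/g.
dedx = bethe(1, 13, 26.98, 166e-6, 3.0)
```

Heavy-ion codes scale this with effective-charge and low-energy corrections, which is exactly where the databases disagree most.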
NASA Technical Reports Server (NTRS)
Hague, D. S.; Rozendaal, H. L.
1977-01-01
Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
[Class III surgical patients facilitated by accelerated osteogenic orthodontic treatment].
Wu, Jia-qi; Xu, Li; Liang, Cheng; Zou, Wei; Bai, Yun-yang; Jiang, Jiu-hui
2013-10-01
To evaluate the treatment time and the anterior and posterior tooth movement pattern while closing extraction space in Class III surgical patients facilitated by accelerated osteogenic orthodontic treatment. There were 10 skeletal Class III patients in the accelerated osteogenic orthodontic (AOO) group and 10 patients in the control group. Upper first premolars were extracted in all patients. After leveling and alignment (T2), corticotomy was performed in the area of the maxillary anterior teeth to accelerate space closing. Study models of the upper dentition were taken before orthodontic treatment (T1) and after space closing (T3). All the casts were laser scanned, and the distances of the movement of incisors and molars were digitally measured. The distances of tooth movement in the two groups were recorded and analyzed. The alignment time did not differ significantly between the two groups. The treatment time in the AOO group from T2 to T3 was less than that in the control group (shorter by 9.1 ± 4.1 months), and the treatment time from T1 to T3 was also less (shorter by 6.3 ± 4.8 months); the differences were significant (P < 0.01). Average distances of upper incisor movement (D1) in the AOO and control groups were (2.89 ± 1.48) and (3.10 ± 0.95) mm, respectively. Average distances of upper first molar movement (D2) in the AOO and control groups were (2.17 ± 1.13) and (2.45 ± 1.04) mm, respectively. No statistically significant difference was found between the two groups (P > 0.05). Accelerated osteogenic orthodontic treatment could accelerate space closing in Class III surgical patients and shorten preoperative orthodontic time. There was no influence on the movement pattern of the anterior and posterior teeth during pre-surgical orthodontic treatment.
Comparison of two methods of MMPI-2 profile classification.
Munley, P H; Germain, J M
2000-10-01
The present study investigated the extent of agreement of the highest scale method and the best-fit method in matching MMPI-2 profiles to database code-type profiles and considered profile characteristics that may relate to agreement or disagreement of code-type matches by these two methods. A sample of 519 MMPI-2 profiles that had been classified into database profile code types by these two methods was studied. Resulting code-type matches were classified into three groups: identical (30%), similar (39%), and different (31%), and the profile characteristics of profile elevation, dispersion, and profile code-type definition were studied. Profile code-type definition was significantly different across the three groups with identical and similar match profile groups showing greater profile code-type definition and the different group consisting of profiles that were less well-defined.
The weight hierarchies and chain condition of a class of codes from varieties over finite fields
NASA Technical Reports Server (NTRS)
Wu, Xinen; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
The generalized Hamming weights of linear codes were first introduced by Wei. These are fundamental parameters related to the minimal overlap structures of the subcodes and very useful in several fields. It was found that the chain condition of a linear code is convenient in studying the generalized Hamming weights of the product codes. In this paper we consider a class of codes defined over some varieties in projective spaces over finite fields, whose generalized Hamming weights can be determined by studying the orbits of subspaces of the projective spaces under the actions of classical groups over finite fields, i.e., the symplectic groups, the unitary groups and orthogonal groups. We give the weight hierarchies and generalized weight spectra of the codes from Hermitian varieties and prove that the codes satisfy the chain condition.
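The generalized Hamming weights discussed above can be computed by brute force for small codes: the r-th weight d_r is the minimum support size of any r-dimensional subcode. The sketch below does this for the [7,4] binary Hamming code (a standard example, not one of the codes from varieties treated in the paper); its known hierarchy is (3, 5, 6, 7).

```python
# Hedged illustration: brute-force weight hierarchy of a small binary
# linear code. d_r = min support size over r-dimensional subcodes; the
# support of a subcode equals the union of supports of any basis of it.
from itertools import combinations, product

G = [  # generator matrix of the [7,4] binary Hamming code (one common form)
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def codewords(G):
    """All 2^k codewords as tuples over GF(2)."""
    n = len(G[0])
    words = []
    for coeffs in product([0, 1], repeat=len(G)):
        w = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                w = [a ^ b for a, b in zip(w, row)]
        words.append(tuple(w))
    return words

def rank_gf2(rows):
    """Rank over GF(2) by Gaussian elimination."""
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def weight_hierarchy(G):
    """d_1 <= d_2 <= ... <= d_k via exhaustive search over subcode bases."""
    words = [w for w in codewords(G) if any(w)]
    k, n = rank_gf2(G), len(words[0])
    hierarchy = []
    for r in range(1, k + 1):
        best = None
        for subset in combinations(words, r):
            if rank_gf2(list(subset)) < r:
                continue  # not linearly independent: not an r-dim subcode
            supp = sum(1 for i in range(n) if any(w[i] for w in subset))
            if best is None or supp < best:
                best = supp
        hierarchy.append(best)
    return hierarchy
```

Note d_1 is the ordinary minimum distance (3 here), and by Wei's duality the hierarchy of the dual simplex code determines this one.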
Ensemble coding of face identity is not independent of the coding of individual identity.
Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina
2018-06-01
Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation (access to ensemble information in the absence of detailed exemplar information) that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.
Düzgün, Irem; Baltacı, Gül; Atay, O Ahmet
2011-01-01
In this study, we sought to compare the effects of the slow and accelerated protocols on pain and functional activity level after arthroscopic rotator cuff repair. The study included 29 patients (3 men, 26 women) who underwent arthroscopic repair of stage 2 and 3 rotator cuff tears. Patients were randomized into two groups: the accelerated protocol group (n=13) and the slow protocol group (n=16). Patients in the accelerated protocol group participated in a preoperative rehabilitation program for 4-6 weeks. Patients were evaluated preoperatively and for 24 weeks postoperatively. Pain was assessed by visual analog scale, and functional activity level was assessed by the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire. Active range of motion was initiated at week 3 after surgery in the accelerated rehabilitation protocol and at week 6 in the slow protocol. The rehabilitation program was completed by the 8th week with the accelerated protocol and by the 22nd week with the slow protocol. There was no significant difference between the slow and accelerated protocols with regard to pain at rest (p>0.05). However, the accelerated protocol was associated with less pain during activity at weeks 5 and 16, and with less pain at night during week 5 (p<0.05). The accelerated protocol was superior to the slow protocol in terms of functional activity level, as determined by DASH at weeks 8, 12, and 16 after surgery (p<0.05). The accelerated protocol is recommended to physical therapists during rehabilitation after arthroscopic rotator cuff repair to prevent the negative effects of immobilization and to support rapid reintegration to daily living activities.
NASA Astrophysics Data System (ADS)
Azmi, K.; Kusnanik, N. W.
2018-01-01
This study aimed to analyze the effect of a speed, agility and quickness (SAQ) training program on improvements in speed, agility and acceleration. The study was conducted on 26 soccer players divided into 2 groups of 13 players each. Group 1 was given the SAQ training program and Group 2 a conventional training program for 8 weeks. This study used a quantitative approach with a quasi-experimental method and a matching-only design. Data were collected by testing the 30-meter sprint (speed), agility t-test (agility), and 10-meter run (acceleration) during the pretest and posttest. The data were then analyzed using the paired-sample t-test and independent t-test. The results showed that there was a significant effect of the speed, agility and quickness training program on improving speed, agility and acceleration. In summary, it can be concluded that the speed, agility and quickness training program can improve the speed, agility and acceleration of soccer players.
Supplementing Accelerated Reading with Classwide Interdependent Group-Oriented Contingencies
ERIC Educational Resources Information Center
Pappas, Danielle N.; Skinner, Christopher H.; Skinner, Amy L.
2010-01-01
An across-groups (classrooms), multiple-baseline design was used to investigate the effects of an interdependent group-oriented contingency on the Accelerated Reader (AR) performance of fourth-grade students. A total of 32 students in three classes participated. Before the study began, an independent group-oriented reward program was being applied…
Three-dimensional simulation of triode-type MIG for 1 MW, 120 GHz gyrotron for ECRH applications
NASA Astrophysics Data System (ADS)
Singh, Udaybir; Kumar, Nitin; Kumar, Narendra; Kumar, Anil; Sinha, A. K.
2012-01-01
In this paper, the three-dimensional simulation of a triode-type magnetron injection gun (MIG) for a 120 GHz, 1 MW gyrotron is presented. The operating voltages of the modulating anode and the accelerating anode are 57 kV and 80 kV, respectively. The high-order TE22,6 mode is selected as the operating mode, and the electron beam is launched at the first radial maximum for fundamental beam-mode operation. The initial design is obtained by using the in-house developed code MIGSYN. The numerical simulation is performed by using the commercially available code CST-Particle Studio (PS). The simulated MIG results obtained with CST-PS are validated against two other simulation codes, EGUN and TRAK. The design output parameters obtained with these three codes are found to be in close agreement.
Design of the central region in the Gustaf Werner cyclotron at the Uppsala university
NASA Astrophysics Data System (ADS)
Toprek, Dragan; Reistad, Dag; Lundstrom, Bengt; Wessman, Dan
2002-07-01
This paper describes the design of the central region in the Gustaf Werner cyclotron for h=1, 2 and 3 modes of acceleration. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with the computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively.
Light ion beam fusion research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yonas, G.
1983-01-01
Data has been collected on PBFA I using three related diode types: (1) the Ampfion diode, (2) the applied field diode, and (3) the pinch reflex diode. Concurrent with these PBFA I experiments, complementary experiments were carried out on Proto I at Sandia, as well as the Lion accelerator at Cornell University, and the Gamble II accelerator at the Naval Research Laboratory. In addition to these experiments, improved electromagnetic particle-in-cell codes and analytical treatments were brought to bear on improving our understanding of diode phenomena. A brief review of some of the results is given.
Evaluation of new techniques for the calculation of internal recirculating flows
NASA Technical Reports Server (NTRS)
Van Doormaal, J. P.; Turan, A.; Raithby, G. D.
1987-01-01
The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined, and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, which has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.
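The kind of solver acceleration evaluated above can be illustrated on a minimal model problem: successive over-relaxation (SOR) of Gauss-Seidel for the 1D Laplace equation. This is a hedged, generic illustration, not the TEACH code or the specific accelerators from the paper.

```python
# Hedged illustration: over-relaxing a Gauss-Seidel sweep for
# u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1 (exact solution u = x).

def solve(n=50, omega=1.0, tol=1e-8, max_iter=100000):
    """Return (solution, iterations to reach tol on the update size)."""
    u = [0.0] * (n + 1)
    u[n] = 1.0                               # boundary condition
    for it in range(max_iter):
        err = 0.0
        for i in range(1, n):
            # Gauss-Seidel update, scaled by the relaxation factor omega.
            delta = omega * (0.5 * (u[i - 1] + u[i + 1]) - u[i])
            u[i] += delta
            err = max(err, abs(delta))
        if err < tol:
            return u, it + 1
    return u, max_iter

u_gs, n_gs = solve(omega=1.0)    # plain Gauss-Seidel
u_sor, n_sor = solve(omega=1.9)  # over-relaxed: far fewer iterations
```

For this grid the over-relaxed sweep converges in a small fraction of the plain Gauss-Seidel iteration count, the same kind of gain (if by different mechanisms) that solver accelerators deliver inside a flow code.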
Zebra: An advanced PWR lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, L.; Wu, H.; Zheng, Y.
2012-07-01
This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is developed based on the sub-group method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and the customization of AutoCAD software. The whole code is well organized in a modular software structure. Numerical results obtained during the validation of the code demonstrate that it has good precision and high efficiency. (authors)
Nimptsch, Ulrike
2016-06-01
To investigate changes in comorbidity coding after the introduction of diagnosis related groups (DRGs) based prospective payment and whether trends differ regarding specific comorbidities. Nationwide administrative data (DRG statistics) from German acute care hospitals from 2005 to 2012. Observational study to analyze trends in comorbidity coding in patients hospitalized for common primary diseases and the effects on comorbidity-related risk of in-hospital death. Comorbidity coding was operationalized by Elixhauser diagnosis groups. The analyses focused on adult patients hospitalized for the primary diseases of heart failure, stroke, and pneumonia, as well as hip fracture. When focusing on the total frequency of diagnosis groups per record, an increase in depth of coding was observed. Between-hospital variations in depth of coding were present throughout the observation period. Increases in specific comorbidities were observed in 15 of the 31 diagnosis groups, and decreases were observed for 11 groups. In patients hospitalized for heart failure, shifts of comorbidity-related risk of in-hospital death occurred in nine diagnosis groups, of which eight were directed toward the null. Comorbidity-adjusted outcomes in longitudinal administrative data analyses may be biased by nonconstant risk over time, changes in completeness of coding, and between-hospital variations in coding. Accounting for such issues is important when the respective observation period coincides with changes in the reimbursement system or other conditions that are likely to alter clinical coding practice. © Health Research and Educational Trust.
Collisional disruptions of rotating targets
NASA Astrophysics Data System (ADS)
Ševeček, Pavel; Broz, Miroslav
2017-10-01
Collisions are key processes in the evolution of the Main Asteroid Belt, and impact events (i.e., target fragmentation and gravitational reaccumulation) are commonly studied by numerical simulations, namely by SPH and N-body methods. In our work, we extend the previous studies by assuming rotating targets, and we study the dependence of the resulting size distributions on the pre-impact rotation of the target. To obtain stable initial conditions, it is also necessary to include self-gravity already in the fragmentation phase, which was previously neglected. To tackle this problem, we developed an SPH code, accelerated by the SSE/AVX instruction sets and parallelized. The code solves the standard set of hydrodynamic equations, using the Tillotson equation of state, the von Mises criterion for plastic yielding, and the scalar Grady-Kipp model for fragmentation. We further modified the velocity gradient by a correction tensor (Schäfer et al. 2007) to ensure first-order conservation of the total angular momentum. As the intact target is a spherical body, its gravity can be approximated by the potential of a homogeneous sphere, making it easy to set up initial conditions. This is however infeasible for later stages of the disruption; to this end, we included the Barnes-Hut algorithm to compute the gravitational accelerations, using a multipole expansion of distant particles up to hexadecapole order. We tested the code carefully, comparing the results to our previous computations obtained with the SPH5 code (Benz and Asphaug 1994). Finally, we ran a set of simulations, and we discuss the differences between the synthetic families created by rotating and static targets.
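The Barnes-Hut idea used above for the self-gravity phase can be sketched in one dimension with the lowest (monopole) term only; the authors' code expands to hexadecapole order and uses a tree, so this is a hedged toy with assumed positions, masses, and G = 1.

```python
# Hedged 1D toy of the Barnes-Hut opening criterion: a distant group of
# particles is replaced by its total mass at the centre of mass when
# size/distance falls below the opening angle theta; otherwise the
# exact pairwise sum is used.

def direct_accel(x, particles):
    """Exact pairwise gravitational acceleration at x (G = 1)."""
    a = 0.0
    for xp, m in particles:
        dx = xp - x
        a += m * dx / abs(dx) ** 3
    return a

def bh_accel(x, particles, theta=0.5):
    """Monopole approximation when the opening test passes."""
    mtot = sum(m for _, m in particles)
    com = sum(xp * m for xp, m in particles) / mtot
    size = max(xp for xp, _ in particles) - min(xp for xp, _ in particles)
    if size / abs(com - x) < theta:        # group is far enough away
        dx = com - x
        return mtot * dx / abs(dx) ** 3
    return direct_accel(x, particles)      # too close: sum directly

# A compact cluster of three particles far from the evaluation point.
cluster = [(10.0, 1.0), (10.5, 2.0), (11.0, 1.0)]
```

Higher multipole orders (up to hexadecapole, as in the paper) shrink the approximation error further for the same opening angle, which is why they pay off in long SPH runs.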
Decoding the complex genetic causes of heart diseases using systems biology.
Djordjevic, Djordje; Deshpande, Vinita; Szczesnik, Tomasz; Yang, Andrian; Humphreys, David T; Giannoulatou, Eleni; Ho, Joshua W K
2015-03-01
The pace of disease gene discovery is still much slower than expected, even with the use of cost-effective DNA sequencing and genotyping technologies. It is increasingly clear that many inherited heart diseases have a more complex polygenic aetiology than previously thought. Understanding the role of gene-gene interactions, epigenetics, and non-coding regulatory regions is becoming increasingly critical in predicting the functional consequences of genetic mutations identified by genome-wide association studies and whole-genome or exome sequencing. A systems biology approach is now being widely employed to systematically discover genes that are involved in heart diseases in humans or relevant animal models through bioinformatics. The overarching premise is that the integration of high-quality causal gene regulatory networks (GRNs), genomics, epigenomics, transcriptomics and other genome-wide data will greatly accelerate the discovery of the complex genetic causes of congenital and complex heart diseases. This review summarises state-of-the-art genomic and bioinformatics techniques that are used in accelerating the pace of disease gene discovery in heart diseases. Accompanying this review, we provide an interactive web-resource for systems biology analysis of mammalian heart development and diseases, CardiacCode ( http://CardiacCode.victorchang.edu.au/ ). CardiacCode features a dataset of over 700 pieces of manually curated genetic or molecular perturbation data, which enables the inference of a cardiac-specific GRN of 280 regulatory relationships between 33 regulator genes and 129 target genes. We believe this growing resource will fill an urgent unmet need to fully realise the true potential of predictive and personalised genomic medicine in tackling human heart disease.
Focus Group Research on the Implications of Adopting the Unified English Braille Code
ERIC Educational Resources Information Center
Wetzel, Robin; Knowlton, Marie
2006-01-01
Five focus groups explored concerns about adopting the Unified English Braille Code. The consensus was that while the proposed changes to the literary braille code would be minor, those to the mathematics braille code would be much more extensive. The participants emphasized that "any code that reduces the number of individuals who can access…
Improving coding accuracy in an academic practice.
Nguyen, Dana; O'Mara, Heather; Powell, Robert
2017-01-01
Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the two intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
Grid Standards and Codes | Grid Modernization | NREL
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. 
Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB to 732 MB (main memory on the host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory). Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as computing resources are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generation of independent streams of random numbers using graphics processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. Running time: The tests provided take a few minutes to run.
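The key guarantee GASPRNG inherits from SPRNG is independent, reproducible parallel streams. A toy Python analogue of the idea (this is not the GASPRNG/SPRNG API, just a sketch of per-stream seeding and reproducibility):

```python
import random

def make_streams(n, base_seed=12345):
    """Create n reproducible pseudorandom streams by giving each
    generator its own seed, a stand-in for SPRNG/GASPRNG-style
    independent stream initialization."""
    return [random.Random(base_seed + i) for i in range(n)]

streams = make_streams(4)
draws = [s.random() for s in streams]

# Reproducibility: re-creating the streams yields identical values,
# mirroring GASPRNG's guarantee of producing streams identical to SPRNG's.
draws_again = [s.random() for s in make_streams(4)]
```

Real SPRNG-family generators use parameterized generator families rather than simple seed offsets, so distinct streams are provably uncorrelated; the sketch only illustrates the usage model.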
Males, J J; Viswanathan, D
2018-01-01
Purpose: To compare the long-term outcomes of accelerated corneal collagen crosslinking (CXL) with conventional CXL for progressive keratoconus. Patients and methods: Comparative clinical study of consecutive progressive keratoconic eyes that underwent either accelerated CXL (9 mW/cm² ultraviolet A (UVA) light irradiance for 10 min) or conventional CXL (3 mW/cm² UVA light irradiance for 30 min). Eyes with a minimum of 12 months' follow-up were included. Post-procedure changes in keratometry readings (flat meridian: K1; steep meridian: K2), central corneal thickness (CCT), best spectacle-corrected visual acuity (BSCVA), and manifest refraction spherical equivalent (MRSE) were analysed. Results: A total of 42 eyes were included: 21 eyes had accelerated CXL (20.5±5.5 months' follow-up) and 21 eyes had conventional CXL (20.2±5.6 months' follow-up). In the accelerated CXL group, a significant reduction in K2 (P=0.02) but no significant change in K1 (P=0.35) or CCT (P=0.62) was noted. In the conventional CXL group, a significant reduction was seen in K1 (P=0.01) and K2 (P=0.04), but not in CCT (P=0.95). Although both groups exhibited significant reductions in K2 readings, no noteworthy differences were noted between them (P=0.36). Improvements in BSCVA (accelerated CXL: P=0.22; conventional CXL: P=0.20) and MRSE (accelerated CXL: P=0.97; conventional CXL: P=0.54) were noted but were not significant in either group. Conclusion: Accelerated and conventional CXL appear to be effective procedures for stabilising progressive keratoconus in the long term.
Hardware accelerated high performance neutron transport computation based on AGENT methodology
NASA Astrophysics Data System (ADS)
Xiao, Shanjie
The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled a 2D transport MOC solver and a 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, a radial 2D MOC solver and an axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, the second part of this research focused on designing specific hardware, based on reconfigurable computing techniques, to accelerate AGENT computations. This is the first time such an approach has been applied to reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on that analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower working frequency than CPUs. Design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times.
The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus opens the possibility of extending the application range of neutron transport analysis in both industrial engineering and academic research.
Diagnostics of Particles emitted from a Laser generated Plasma: Experimental Data and Simulations
NASA Astrophysics Data System (ADS)
Costa, Giuseppe; Torrisi, Lorenzo
2018-01-01
The charged particle emission from laser-generated plasma was studied experimentally and theoretically using the COMSOL simulation code. Particle acceleration was investigated using two lasers in two different regimes. A Nd:YAG laser, with 3 ns pulse duration and 10^10 W/cm² intensity, when focused on a solid target produces a non-equilibrium plasma with an average temperature of about 30-50 eV. An iodine laser with 300 ps pulse duration and 10^16 W/cm² intensity produces plasmas with average temperatures of the order of tens of keV. In both cases charge separation occurs, and ions and electrons are accelerated to energies of the order of 200 eV and 1 MeV per charge state in the two cases, respectively. The simulation program permits plotting of the charged-particle trajectories from the plasma source in vacuum, indicating how they can be deflected by magnetic and electric fields. The simulation code can be employed to design suitable permanent magnets and solenoids to deflect ions toward a secondary target or detectors, to focus ions and electrons, to realize electron traps able to provide significant ion acceleration, and to realize efficient spectrometers. In particular, it was applied to the study of two Thomson parabola spectrometers able to detect ions at low and at high laser intensities. Comparisons between measurements and simulations are presented and discussed.
Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S
2012-03-01
Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, automated software and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and by the Standardized Occupation and Industry Coding software program. We calculated agreement between the coding methods on classification into major Census occupational groups. The automated coding software assigned codes to 71% of occupations and 76% of industries. Of the subset coded by software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding. However, some manual coding is required to code incomplete information. We found substantial variability between coders in the assignment of occupations, although not as large for major groups.
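The agreement figure reported above (89%, kappa = 0.86) uses Cohen's kappa, which discounts chance agreement. A minimal sketch of the computation from a two-rater confusion matrix (the counts below are hypothetical, not the study's):

```python
def cohens_kappa(matrix):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where
    p_o is the observed agreement (diagonal fraction) and p_e the
    chance agreement expected from the raters' marginal distributions."""
    total = sum(sum(row) for row in matrix)
    k = len(matrix)
    p_o = sum(matrix[i][i] for i in range(k)) / total
    p_e = sum(
        (sum(matrix[i]) / total) * (sum(row[i] for row in matrix) / total)
        for i in range(k)
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table: automated (rows) vs manual (columns) coding
table = [[45, 5],
         [5, 45]]
kappa = cohens_kappa(table)
```

Here the raw agreement is 90%, but because chance agreement is 50% for these balanced marginals, kappa comes out lower (0.8), illustrating why kappa is reported alongside percent agreement.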
Beam dynamics study of a 30 MeV electron linear accelerator to drive a neutron source
NASA Astrophysics Data System (ADS)
Kumar, Sandeep; Yang, Haeryong; Kang, Heung-Sik
2014-02-01
An experimental neutron facility based on a 32 MeV/18.47 kW electron linac has been studied by means of the PARMELA simulation code. A beam dynamics study of a traveling-wave, constant-gradient electron accelerator was carried out to reach the target operation parameters (E = 30 MeV, P = 18 kW, dE/E < 12.47% for 99% of particles). The linac comprises mainly an electron gun, pre-buncher, buncher, and two accelerating columns. A disk-loaded, on-axis-coupled, 2π/3-mode accelerating rf cavity is considered for this linac. After numerous optimizations of the linac parameters, a beam energy of 32 MeV is obtained at the end of the linac; such high electron energy is required to produce an acceptable neutron flux. The final neutron flux is estimated to be 5 × 10^11 n/cm²/s/mA. Future work will be the detailed design of a 30 MeV electron linac based on S-band traveling-wave structures.
NASA Astrophysics Data System (ADS)
Kononenko, O.; Lopes, N. C.; Cole, J. M.; Kamperidis, C.; Mangles, S. P. D.; Najmudin, Z.; Osterhoff, J.; Poder, K.; Rusby, D.; Symes, D. R.; Warwick, J.; Wood, J. C.; Palmer, C. A. J.
2016-09-01
In this work, two-dimensional (2D) hydrodynamic simulations of a variable-length gas cell were performed using the open source fluid code OpenFOAM. The gas cell was designed to study controlled injection of electrons into a laser-driven wakefield at the Astra Gemini laser facility. The target consists of two compartments, an accelerator and an injector section, connected via an aperture. A sharp transition between the peak and plateau density regions in the injector and accelerator compartments, respectively, was observed in simulations with various inlet pressures. The fluid simulations indicate that the length of the down-ramp connecting the sections depends on the aperture diameter, as does the density drop outside the entrance and exit cones. Further studies showed that increasing the inlet pressure leads to turbulence and strong fluctuations in density along the axial profile during target filling and, consequently, is expected to negatively impact accelerator stability.
Group theoretical formulation of free fall and projectile motion
NASA Astrophysics Data System (ADS)
Düztaş, Koray
2018-07-01
In this work we formulate the group theoretical description of free fall and projectile motion. We show that the kinematic equations for constant acceleration form a one-parameter group acting on a phase space. We define the group elements ϕ_t by their action on the points in the phase space. We also generalize this approach to projectile motion. We evaluate the group orbits regarding their relations to the physical orbits of particles and unphysical solutions. We note that the group theoretical formulation does not apply to more general cases involving a time-dependent acceleration. This method improves our understanding of the constant acceleration problem with its global approach. It is especially beneficial for students who want to pursue a career in theoretical physics.
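The one-parameter group structure can be checked directly: with ϕ_t acting on a phase-space point (x, v) by the constant-acceleration kinematic equations, the group law ϕ_t ∘ ϕ_s = ϕ_(t+s) holds exactly. A small sketch using exact rational arithmetic (the numerical values are arbitrary illustrations):

```python
from fractions import Fraction

def phi(t, state, a):
    """Group element phi_t acting on a phase-space point (x, v)
    under constant acceleration a: the kinematic flow map
    x -> x + v t + a t^2 / 2, v -> v + a t."""
    x, v = state
    return (x + v * t + Fraction(1, 2) * a * t**2, v + a * t)

a = Fraction(-98, 10)             # free fall, g = 9.8 in assumed units
p0 = (Fraction(0), Fraction(20))  # initial height and upward velocity

# One-parameter group law: phi_t(phi_s(p)) == phi_{t+s}(p)
lhs = phi(Fraction(3), phi(Fraction(5), p0, a), a)
rhs = phi(Fraction(8), p0, a)
```

The identity element is ϕ_0 and the inverse of ϕ_t is ϕ_(-t); for a time-dependent acceleration the flow maps no longer compose by adding the parameters, which is the limitation the authors note.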
ERIC Educational Resources Information Center
Mayhew, Matthew J.; Simonoff, Jeffrey S.
2015-01-01
The purpose of this article is to describe effect coding as an alternative quantitative practice for analyzing and interpreting categorical, race-based independent variables in higher education research. Unlike indicator (dummy) codes that imply that one group will be a reference group, effect codes use average responses as a means for…
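The contrast the article draws between indicator (dummy) and effect codes can be sketched numerically. In this illustration (hypothetical three-level variable and made-up balanced group means, not the article's data), the effect-coded intercept equals the grand mean of group means and each coefficient is a group's deviation from it, whereas dummy coding measures deviations from a reference group:

```python
# Effect coding for a three-level categorical variable (levels A, B, C),
# with level C as the held-out row coded (-1, -1).
effect_codes = {"A": (1, 0), "B": (0, 1), "C": (-1, -1)}
dummy_codes = {"A": (1, 0), "B": (0, 1), "C": (0, 0)}  # C = reference

# Hypothetical balanced group means for some outcome
means = {"A": 3.0, "B": 5.0, "C": 7.0}
grand_mean = sum(means.values()) / len(means)

# Effect coding: each group is interpreted relative to the grand mean,
# so no single group serves as the baseline ...
effects = {g: m - grand_mean for g, m in means.items()}

# ... while dummy coding interprets each group relative to reference C.
dummy_coefs = {g: m - means["C"] for g, m in means.items() if g != "C"}
```

The effect estimates sum to zero by construction, which is why no group is privileged as the comparison category, the property the article highlights for race-based variables.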
The ZPIC educational code suite
NASA Astrophysics Data System (ADS)
Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.
2017-10-01
Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
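One of the textbook mechanisms such educational PIC codes demonstrate is the cold plasma oscillation, and the time-centred leapfrog scheme below is the same integrator family PIC codes use for the particle push. This is a stand-alone sketch in normalized units, not ZPIC code: a single degree of freedom obeying dx/dt = v, dv/dt = -ωp² x.

```python
import math

def leapfrog_oscillation(wp, dt, steps, x0=1.0):
    """Leapfrog integration of dx/dt = v, dv/dt = -wp^2 x, the equation
    of a cold plasma oscillation at plasma frequency wp. The velocity is
    staggered half a step from the position, as in a PIC particle push."""
    x, v = x0, 0.0
    v -= 0.5 * wp**2 * x * dt       # shift v back to t = -dt/2
    for _ in range(steps):
        x += v * dt                 # drift
        v -= wp**2 * x * dt         # kick
    return x

wp = 1.0
dt = 0.01
steps = int(round(2 * math.pi / dt))  # integrate one oscillation period
x_final = leapfrog_oscillation(wp, dt, steps)
```

After one full period the displacement returns close to its initial value; the leapfrog scheme's time-centring keeps the amplitude bounded over long runs, which is why PIC codes favour it over naive Euler stepping.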
Li, Rui; Liao, Xian-Hua; Ye, Jun-Zhao; Li, Min-Rui; Wu, Yan-Qin; Hu, Xuan; Zhong, Bi-Hui
2017-06-14
To test the hypothesis that K8/K18 variants predispose humans to non-alcoholic fatty liver disease (NAFLD) progression and its metabolic phenotypes. We selected a total of 373 unrelated adult subjects from our Physical Examination Department, including 200 unrelated NAFLD patients and 173 controls of both genders and different ages. Diagnoses of NAFLD were established according to ultrasonic signs of fatty liver. All subjects were tested for population characteristics, lipid profile, liver tests, as well as glucose tests. Genomic DNA was obtained from peripheral blood with a DNeasy Tissue Kit. K8/K18 coding regions were analyzed, including 15 exons and exon-intron boundaries. Among 200 NAFLD patients, 10 (5%) heterozygous carriers of keratin variants were identified. There were 5 amino-acid-altering heterozygous variants and 6 non-coding heterozygous variants. One novel amino-acid-altering heterozygous variant (K18 N193S) and three novel non-coding variants were observed (K8 IVS5-9A→G, K8 IVS6+19G→A, K18 T195T). A total of 9 patients had a single variant and 1 patient had compound variants (K18 N193S+K8 IVS3-15C→G). Only one R341H variant was found in the control group (1 of 173, 0.58%). The frequency of keratin variants in NAFLD patients was significantly higher than that in the control group (5% vs 0.58%, P = 0.015). Notably, the keratin variants were significantly associated with insulin resistance (IR) in NAFLD patients (8.86% in NAFLD patients with IR vs 2.5% in NAFLD patients without IR, P = 0.043). K8/K18 variants are overrepresented in Chinese NAFLD patients and might accelerate liver fat storage through IR.
GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing
Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal
2016-01-01
Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
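GeauxDock is described as built upon the Monte Carlo algorithm. The core acceptance rule in Metropolis Monte Carlo sampling can be sketched as follows (an illustrative stand-in, not GeauxDock's actual code or scoring function):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng):
    """Metropolis criterion: always accept a trial pose that lowers the
    score (delta_e <= 0); otherwise accept with probability
    exp(-delta_e / T), allowing occasional uphill moves."""
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

rng = random.Random(0)

# Downhill moves (score improvements) are always accepted
always = all(metropolis_accept(-1.0, 0.5, rng) for _ in range(100))

# Strongly uphill moves at low temperature are almost always rejected
rate = sum(metropolis_accept(5.0, 0.5, rng) for _ in range(10000)) / 10000
```

The uphill-acceptance probability is what lets a docking search escape local minima of the scoring function instead of greedily descending into the first pose it finds.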
ICPP: Relativistic Plasma Physics with Ultra-Short High-Intensity Laser Pulses
NASA Astrophysics Data System (ADS)
Meyer-Ter-Vehn, Juergen
2000-10-01
Recent progress in generating ultra-short high-intensity laser pulses has opened a new branch of relativistic plasma physics, which is discussed in this talk in terms of particle-in-cell (PIC) simulations. These pulses create small plasma volumes of high-density plasma with plasma fields above 10^12 V/m and 10^8 Gauss. At intensities beyond 10^18 W/cm^2, now available from table-top systems, they drive relativistic electron currents in self-focussing plasma channels. These currents are close to the Alfven limit and allow the study of relativistic current filamentation. A most remarkable feature is the generation of well collimated relativistic electron beams emerging from the channels with energies up to GeV. In dense matter they trigger cascades of gamma-rays, e^+e^- pairs, and a host of nuclear and particle processes. One of the applications may be fast ignition of compressed inertial fusion targets. Above 10^23 W/cm^2, expected to be achieved in the future, solid-density matter becomes relativistically transparent for optical light, and the acceleration of protons to multi-GeV energies is predicted in plasma layers less than 1 mm thick. These results open completely new perspectives for plasma-based accelerator schemes. Three-dimensional PIC simulations turn out to be the superior tool to explore the relativistic plasma kinetics at such intensities. Results obtained with the VLPL code [1] are presented. Different mechanisms of particle acceleration are discussed. Both laser wakefield and direct laser acceleration in plasma channels (by a mechanism similar to inverse free electron lasers) have been identified. The latter describes recent MPQ experimental results. [1] A. Pukhov, J. Plasma Physics 61, 425 - 433 (1999): Three-dimensional electromagnetic relativistic particle-in-cell code VLPL (Virtual Laser Plasma Laboratory).
SU-E-T-525: Ionization Chamber Perturbation in Flattening Filter Free Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, D; Voigts-Rhetz, P von; Zink, K
2015-06-15
Purpose: Changing the characteristic of a photon beam by mechanically removing the flattening filter may impact the dose response of ionization chambers. Thus, perturbation factors of cylindrical ionization chambers in conventional and flattening filter free photon beams were calculated by Monte Carlo simulations. Methods: The EGSnrc/BEAMnrc code system was used for all Monte Carlo calculations. BEAMnrc models of nine different linear accelerators with and without flattening filter were used to create realistic photon sources. Monte Carlo based calculations to determine the fluence perturbations due to the presence of the chamber's components, the different material of the sensitive volume (air instead of water), as well as the volume effect were performed with the user code egs-chamber. Results: Stem, central electrode, wall, density, and volume perturbation factors for linear accelerators with and without flattening filter were calculated as a function of the beam quality specifier TPR20/10. A bias between the perturbation factors as a function of TPR20/10 for flattening filter free beams and conventional linear accelerators could not be observed for the perturbations caused by the components of the ionization chamber and the sensitive volume. Conclusion: The results indicate that the well-known small bias between the beam quality correction factor as a function of TPR20/10 for flattening filter free and conventional linear accelerators is not caused by the geometry of the detector but rather by the material of the sensitive volume. This suggests that the bias for flattening filter free photon fields is caused only by the different material of the sensitive volume (air instead of water).
Use Hierarchical Storage and Analysis to Exploit Intrinsic Parallelism
NASA Astrophysics Data System (ADS)
Zender, C. S.; Wang, W.; Vicente, P.
2013-12-01
Big Data is an ugly name for the scientific opportunities and challenges created by the growing wealth of geoscience data. How to weave large, disparate datasets together to best reveal their underlying properties, to exploit their strengths and minimize their weaknesses, to continually aggregate more information than the world knew yesterday and less than we will learn tomorrow? Data analytics techniques (statistics, data mining, machine learning, etc.) can accelerate pattern recognition and discovery. However, researchers must often organize multiple related datasets into a coherent framework prior to analysis. Hierarchical organization permits entire datasets to be stored in nested groups that reflect their intrinsic relationships and similarities. Hierarchical data can be simpler and faster to analyze when operators are coded to automatically parallelize processes over isomorphic storage units, i.e., groups. The newest generation of netCDF Operators (NCO) embodies this hierarchical approach, while still supporting traditional analysis approaches. We will use NCO to demonstrate the trade-offs involved in processing a prototypical Big Data application (analysis of CMIP5 datasets) using hierarchical and traditional analysis approaches.
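The idea of parallelizing identical analysis over isomorphic groups can be sketched in a few lines. Here a nested dict stands in for a hierarchical netCDF file (the group and variable names are hypothetical; NCO itself operates on real netCDF groups):

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Stand-in for a hierarchical dataset: one group per model, each with
# the same (isomorphic) variable layout -- made-up names and values.
dataset = {
    "modelA": {"tas": [287.1, 288.0, 288.4]},
    "modelB": {"tas": [286.5, 287.2, 287.9]},
    "modelC": {"tas": [288.2, 288.9, 289.1]},
}

def group_mean(item):
    """Per-group analysis: identical code applied to every group, which
    is what makes the hierarchy trivially parallelizable."""
    name, variables = item
    return name, mean(variables["tas"])

# Because the groups are independent, the same worker can be mapped
# over all of them concurrently.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(group_mean, dataset.items()))
```

The storage layout does the scheduling work: once related datasets live in sibling groups, "run this operator on every group" is the whole parallelization strategy.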
NASA Astrophysics Data System (ADS)
Uzdensky, Dmitri
Relativistic astrophysical plasma environments routinely produce intense high-energy emission, which is often observed to be nonthermal and rapidly flaring. The recently discovered gamma-ray (> 100 MeV) flares in the Crab Pulsar Wind Nebula (PWN) provide a quintessential illustration of this, but other notable examples include relativistic active galactic nuclei (AGN) jets, such as blazars, and Gamma-ray Bursts (GRBs). Understanding the processes responsible for the very efficient and rapid relativistic particle acceleration and subsequent emission that occurs in these sources poses a strong challenge to modern high-energy astrophysics, especially in light of the necessity to overcome radiation reaction during the acceleration process. Magnetic reconnection and collisionless shocks have been invoked as possible mechanisms. However, the inferred extreme particle acceleration requires the presence of coherent electric-field structures. How such large-scale accelerating structures (such as reconnecting current sheets) can spontaneously arise in turbulent astrophysical environments remains a mystery. The proposed project will conduct a first-principles computational and theoretical study of kinetic turbulence in relativistic collisionless plasmas with a special focus on nonthermal particle acceleration and radiation emission. The main computational tool employed in this study will be the relativistic radiative particle-in-cell (PIC) code Zeltron, developed by the team members at the Univ. of Colorado. This code has a unique capability to self-consistently include the synchrotron and inverse-Compton radiation reaction force on the relativistic particles, while simultaneously computing the resulting observable radiative signatures.
This proposal envisions performing massively parallel, large-scale three-dimensional simulations of driven and decaying kinetic turbulence in physical regimes relevant to real astrophysical systems (such as the Crab PWN), including the radiation reaction effects. In addition to measuring the general fluid-level statistical properties of kinetic turbulence (e.g., the turbulent spectrum in the inertial and sub-inertial range), as well as the overall energy dissipation and particle acceleration, the proposed study will also investigate their intermittency and time variability, resulting in direction- and time-resolved emitted photon spectra and direction- and energy-resolved light curves, which can then be compared with observations. To gain deeper physical insight into the intermittent particle acceleration processes in turbulent astrophysical environments, the project will also identify and analyze statistically the current sheets, shocks, and other relevant localized particle-acceleration structures found in the simulations. In particular, it will assess whether relativistic kinetic turbulence in PWN can self-consistently generate such structures that are long and strong enough to accelerate large numbers of particles to the PeV energies required to explain the Crab gamma-ray flares, and where and under what conditions such acceleration can occur. The results of this research will also advance our understanding of the origin of ultra-rapid TeV flares in blazar jets and will have important implications for GRB prompt emission, as well as AGN radio-lobes and radiatively-inefficient accretion flows, such as the flow onto the supermassive black hole at our Galactic Center.
Alternate operating scenarios for NDCX-II
NASA Astrophysics Data System (ADS)
Sharp, W. M.; Friedman, A.; Grote, D. P.; Cohen, R. H.; Lund, S. M.; Vay, J.-L.; Waldron, W. L.
2014-01-01
NDCX-II is a newly completed accelerator facility at LBNL, built to study ion-heated warm dense matter, as well as aspects of ion-driven targets and intense-beam dynamics for inertial-fusion energy. The baseline design calls for using 12 induction cells to accelerate 30-50 nC of Li+ ions to 1.2 MeV. During commissioning, though, we plan to extend the source lifetime by extracting less total charge. Over time, we expect that NDCX-II will be upgraded to substantially higher energies, necessitating the use of heavier ions to keep a suitable deposition range in targets. For operational flexibility, the option of using a helium plasma source is also being investigated. Each of these options requires development of an alternate acceleration schedule. The schedules here are worked out with a fast-running 1-D particle-in-cell code ASP.
A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.
Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming
2017-06-16
Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.
Beam dynamic simulation and optimization of the CLIC positron source and the capture linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayar, C., E-mail: cafer.bayar@cern.ch; CERN, Geneva; Doebert, S., E-mail: Steffen.Doebert@cern.ch
2016-03-25
The CLIC Positron Source is based on a hybrid target composed of a crystal and an amorphous target. Simulations have been performed from the exit of the amorphous target to the end of the pre-injector linac, which captures and accelerates the positrons to an energy of 200 MeV. Simulations are performed with the particle tracking code PARMELA. The magnetic field of the AMD is represented in PARMELA by simple coils. Two modes are applied in this study. The first is an accelerating mode based on acceleration after the AMD. The second is a decelerating mode based on deceleration in the first accelerating structure. It is shown that the decelerating mode gives a higher yield for the e+ beam at the end of the pre-injector linac.
NASA Astrophysics Data System (ADS)
Yin, Yi; Zhong, Hui-Huang; Liu, Jin-Liang; Ren, He-Ming; Yang, Jian-Hua; Zhang, Xiao-Ping; Hong, Zhi-qiang
2010-09-01
A radial-current aqueous resistive solution load was applied to characterize a laser-triggered transformer-type accelerator. The current direction in the dummy load is radial, unlike the axial direction in a traditional load. This type of dummy load therefore has a smaller inductance and a faster response. The load was designed both to meet the resistance requirement of the accelerator and to allow optical access for the laser. Theoretical and numerical calculations of the load's inductance and capacitance are given. The equivalent circuit of the dummy load is calculated in theory and analyzed with a PSPICE code. The simulation results agree well with the theoretical analysis. Finally, experiments applying the dummy load to the high-power spiral pulse forming line were performed; a quasi-square pulse voltage was obtained at the dummy load.
Emittance Growth in the DARHT-II Linear Induction Accelerator
Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.; ...
2017-10-03
The dual-axis radiographic hydrodynamic test (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. On the DARHT-II LIA, we measure an emittance higher than predicted by theoretical simulations, and even though this accelerator produces submillimeter source spots, we are exploring ways to improve the emittance. Some of the possible causes for the discrepancy have been investigated using particle-in-cell codes. Finally, the simulations establish that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.
Feasibility of an XUV FEL Oscillator Driven by a SCRF Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Freund, H. P.; Reinsch, M.
The Advanced Superconducting Test Accelerator (ASTA) facility is currently under construction at Fermi National Accelerator Laboratory. Using a 1-ms-long macropulse composed of up to 3000 micropulses, and with beam energies projected from 45 to 800 MeV, the possibility of an extreme ultraviolet (XUV) free-electron laser oscillator (FELO) at the higher energy is evaluated. We have used both GINGER with an oscillator module and the MEDUSA/OPC code to assess FELO saturation prospects at 120 nm, 40 nm, and 13.4 nm. The results support saturation at all of these wavelengths, which are also shorter than the demonstrated shortest-wavelength record of 176 nm from a storage-ring-based FELO. This indicates that linac-driven FELOs can be extended into this XUV wavelength regime, previously reached only with single-pass FEL configurations.
Thermonuclear targets for direct-drive ignition by a megajoule laser pulse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bel’kov, S. A.; Bondarenko, S. V.; Vergunova, G. A.
2015-10-15
Central ignition of a thin two-layer-shell fusion target that is directly driven by a 2-MJ profiled pulse of Nd laser second-harmonic radiation has been studied. The parameters of the target were selected so as to provide effective acceleration of the shell toward the center, which was sufficient for the onset of ignition under conditions of increased hydrodynamic stability of the ablator acceleration and compression. The aspect ratio of the inner deuterium-tritium layer of the shell does not exceed 15, provided that a major part (above 75%) of the outer layer (plastic ablator) is evaporated by the instant of maximum compression. The investigation is based on two series of numerical calculations that were performed using one-dimensional (1D) hydrodynamic codes. The first 1D code was used to calculate the absorption of the profiled laser-radiation pulse (including calculation of the total absorption coefficient with allowance for the inverse bremsstrahlung and resonance mechanisms) and the spatial distribution of target heating for a real geometry of irradiation using 192 laser beams in a scheme of focusing with a cubo-octahedral symmetry. The second 1D code was used for simulating the total cycle of target evolution under the action of absorbed laser radiation and for determining the thermonuclear gain that was achieved with a given target.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to execute efficiently on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed source mode, but fixed source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
Comparison of H.265/HEVC encoders
NASA Astrophysics Data System (ADS)
Trochimiuk, Maciej
2016-09-01
The H.265/HEVC is the state-of-the-art video compression standard, which allows a bitrate reduction of up to 50% compared with its predecessor, H.264/AVC, while maintaining equal perceptual video quality. The gain in coding efficiency was achieved by increasing the number of available intra- and inter-frame prediction modes and by improving existing tools such as entropy coding and filtering. Nevertheless, to achieve real-time performance of the encoder, simplifications in the algorithm are inevitable. Some features and coding modes must be skipped to reduce the time needed to evaluate modes forwarded to rate-distortion optimisation. Thus, the potential acceleration of the encoding process comes at the expense of coding efficiency. In this paper, a trade-off between video quality and encoding speed of various H.265/HEVC encoders is discussed.
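The rate-distortion optimisation mentioned above selects, for each block, the mode minimising the Lagrangian cost J = D + λR; pruning candidate modes before this comparison is exactly the speed/efficiency trade-off the paper discusses. A minimal sketch with hypothetical candidate numbers:

```python
def rd_cost(distortion, rate_bits, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

def pick_mode(candidates, lam):
    """Return the name of the candidate mode with the lowest RD cost.
    candidates: iterable of (name, distortion, rate_bits) tuples."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]

# Hypothetical per-block candidates: (mode name, distortion, rate in bits)
candidates = [
    ("intra_DC", 120.0, 40),
    ("intra_angular", 90.0, 80),
    ("skip", 200.0, 2),
]
```

At a small λ distortion dominates and an expensive intra mode wins; at a large λ rate dominates and the cheap skip mode wins, which is how the same machinery adapts to different quality targets.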
NASA Technical Reports Server (NTRS)
Ghosh, Amrit Raj
1996-01-01
The viscous Navier-Stokes solver for turbomachinery applications, MSUTC, has been modified to include the rotating frame formulation. The three-dimensional thin-layer Navier-Stokes equations have been cast in a rotating Cartesian frame, enabling the freezing of grid motion. This also allows the flow field associated with an isolated rotor to be viewed as a steady-state problem. Consequently, local time stepping can be used to accelerate convergence. The formulation is validated by running NASA's Rotor 67 as the test case. Results are compared between the rotating frame code and the absolute frame code. The use of the rotating frame approach greatly enhances the performance of the code with respect to savings in computing time, without degradation of the solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasso, A.; Ferrari, A.; Ferrari, A.
In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beamdump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.
Distribution of the background gas in the MITICA accelerator
NASA Astrophysics Data System (ADS)
Sartori, E.; Dal Bello, S.; Serianni, G.; Sonato, P.
2013-02-01
MITICA is the ITER neutral beam test facility to be built in Padova for the generation of a 40 A D- ion beam with a 16×5×16 array of 1280 beamlets accelerated to 1 MV. The background gas pressure distribution and the particle flows inside the MITICA accelerator are critical aspects for stripping losses, generation of secondary particles, and beam non-uniformities. To keep the stripping losses in the extraction and acceleration stages reasonably low, the source pressure should be 0.3 Pa or less. The gas flow in the MITICA accelerator is being studied using a 3D finite element code named Avocado. The gas-wall interaction model is based on the cosine law, and the whole vacuum system geometry is represented by a view factor matrix based on surface discretization and gas property definitions. Pressure distribution and mutual fluxes are then solved linearly. In this paper the result of a numerical simulation is presented, showing the steady-state pressure distribution inside the accelerator when gas enters the system at room temperature. The accelerator model is limited to a horizontal slice 400 mm high (1/4 of the accelerator height). The pressure profile at the solid walls and along the beamlet axis is obtained, allowing the evaluation and discussion of the background gas distribution and its nonuniformity. The particle fluxes at the inlet and outlet boundaries (namely the grounded grid apertures and the lateral conductances, respectively) will be discussed.
A primary standard for low-g shock calibration by laser interferometry
NASA Astrophysics Data System (ADS)
Sun, Qiao; Wang, Jian-lin; Hu, Hong-bo
2014-07-01
This paper presents a novel implementation of a primary standard for low-g shock acceleration calibration by laser interferometry based on rigid body collision at the National Institute of Metrology, China. The mechanical structure of the standard device and the working principles involved in the shock acceleration exciter, laser interferometers and virtual instruments are described. The novel combination of an electromagnetic exciter and a pneumatic exciter as the mechanical power supply of the standard device can deliver a wide range of shock acceleration levels. In addition to polyurethane rubber, two other types of material are investigated to ensure a wide selection of cushioning pads for shock pulse generation, with pulse shapes and data displayed. A heterodyne He-Ne laser interferometer is preferred for its precise and reliable measurement of shock acceleration, while a homodyne one serves as a check standard. Calibration results for a standard acceleration measuring chain are shown together with the uncertainty budget. The expanded calibration uncertainty of the shock sensitivity of the acceleration measuring chain is 0.8%, k = 2, for peak accelerations from 20 to 10 000 m s-2 and pulse durations from 0.5 to 10 ms. This primary shock standard can meet the traceability requirements for shock acceleration in industrial applications from automotive to civil engineering, and is therefore used for piloting the ongoing shock comparison of the Technical Committee of Acoustics, Ultrasound and Vibration (TCAUV) of the Asia Pacific Metrology Program (APMP), coded as APMP.AUV.V-P1.
Numerical predictions of EML (electromagnetic launcher) system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnurr, N.M.; Kerrisk, J.F.; Davidson, R.F.
1987-01-01
The performance of an electromagnetic launcher (EML) depends on a large number of parameters, including the characteristics of the power supply, rail geometry, rail and insulator material properties, injection velocity, and projectile mass. EML system performance is frequently limited by structural or thermal effects in the launcher (railgun). A series of computer codes has been developed at the Los Alamos National Laboratory to predict EML system performance and to determine the structural and thermal constraints on barrel design. These codes include FLD, a two-dimensional electrostatic code used to calculate the high-frequency inductance gradient and surface current density distribution for the rails; TOPAZRG, a two-dimensional finite-element code that simultaneously analyzes thermal and electromagnetic diffusion in the rails; and LARGE, a code that predicts the performance of the entire EML system. The NIKE2D code, developed at the Lawrence Livermore National Laboratory, is used to perform structural analyses of the rails. These codes have been instrumental in the design of the Lethality Test System (LTS) at Los Alamos, which has an ultimate goal of accelerating a 30-g projectile to a velocity of 15 km/s. The capabilities of the individual codes and the coupling of these codes to perform a comprehensive analysis are discussed in relation to the LTS design. Numerical predictions are compared with experimental data and presented for the LTS prototype tests.
Isochronous (CW) Non-Scaling FFAGs: Design and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstone, C.; Berz, M.; Makino, K.
2010-11-04
The drive for higher beam power, high duty cycle, and reliable beams at reasonable cost has focused international attention and design effort on fixed field accelerators, notably Fixed-Field Alternating Gradient accelerators (FFAGs). High-intensity GeV proton drivers encounter duty cycle and space-charge limits in the synchrotron and machine size concerns in the weaker-focusing cyclotrons. A 10-20 MW proton driver is challenging, if even technically feasible, with conventional accelerators--with the possible exception of a SRF linac, which has a large associated cost and footprint. Recently, the concept of isochronous orbits has been explored and developed for nonscaling FFAGs using powerful new methodologies in FFAG accelerator design and simulation. The property of isochronous orbits enables the simplicity of fixed RF and, by tailoring a nonlinear radial field profile, the FFAG can remain isochronous beyond the energy reach of cyclotrons, well into the relativistic regime. With isochronous orbits, the machine proposed here has the high average current advantage and duty cycle of the cyclotron in combination with the strong focusing, smaller losses, and energy variability that are more typical of the synchrotron. This paper reports on these new advances in FFAG accelerator technology and presents advanced modeling tools for fixed-field accelerators unique to the code COSY INFINITY.
Sensitivity Analysis of the Off-Normal Conditions of the SPIDER Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veltri, P.; Agostinetti, P.; Antoni, V.
2011-09-26
In the context of the development of the 1 MV neutral beam injector for the ITER tokamak, the study of beam formation and acceleration has considerable importance. This effort includes SPIDER (Source for Production of Ions of Deuterium Extracted from an Rf plasma), an ion source and accelerator planned to be built in Padova and designed to extract and accelerate a 355 A/m² current of H- (or 285 A/m² D-) up to 100 kV. Exhaustive simulations were already carried out during the accelerator optimization leading to the present design. However, as the accelerator is expected to operate also under pre-programmed or undesired off-normal conditions, the investigation of a large set of off-normal scenarios is necessary. These analyses will also be useful for evaluating the real performance of the machine, and should help in interpreting experimental results or in identifying dangerous operating conditions. The present contribution offers an overview of the results obtained during the investigation of these off-normal conditions by means of different modeling tools and codes. The results showed a good flexibility of the device in different operating conditions. Where the consequences of the abnormalities appeared problematic, further analyses were performed.
Testing cosmic ray acceleration with radio relics: a high-resolution study using MHD and tracers
NASA Astrophysics Data System (ADS)
Wittor, D.; Vazza, F.; Brüggen, M.
2017-02-01
Weak shocks in the intracluster medium may accelerate cosmic-ray protons and cosmic-ray electrons differently depending on the angle between the upstream magnetic field and the shock normal. In this work, we investigate how shock obliquity affects the production of cosmic rays in high-resolution simulations of galaxy clusters. For this purpose, we performed a magnetohydrodynamical simulation of a galaxy cluster using the mesh refinement code ENZO. We use Lagrangian tracers to follow the properties of the thermal gas, the cosmic rays and the magnetic fields over time. We tested a number of different acceleration scenarios by varying the obliquity-dependent acceleration efficiencies of protons and electrons, and by examining the resulting hadronic γ-ray and radio emission. We find that the radio emission does not change significantly if only quasi-perpendicular shocks are able to accelerate cosmic-ray electrons. Our analysis suggests that radio-emitting electrons found in relics have been typically shocked many times before z = 0. On the other hand, the hadronic γ-ray emission from clusters is found to decrease significantly if only quasi-parallel shocks are allowed to accelerate cosmic ray protons. This might reduce the tension with the low upper limits on γ-ray emission from clusters set by the Fermi satellite.
Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S
2014-03-11
Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special-purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to the GPU and Phi systems. Double-precision general matrix multiply operations are endemic in electronic structure calculations, especially methods that include electron correlation, such as density functional theory, second order perturbation theory, and coupled cluster theory. The use of approaches that automatically determine whether to use the host or an accelerator, based on problem size, is explored, with computations occurring on the accelerator and/or the host. For data transfers over PCIe, the GPU provides the best overall performance for data sizes up to 4096 MB, with consistent upload and download rates between 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
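The size-based host/accelerator selection described above can be sketched as a simple flop-count threshold. The crossover value below is a hypothetical placeholder; in practice it would be measured per machine from transfer and compute rates, and this is not the dispatch rule used in the paper.

```python
THRESHOLD = 512  # hypothetical crossover matrix dimension, tuned per machine

def choose_device(m, n, k, threshold=THRESHOLD):
    """Route a GEMM of shape (m x k) @ (k x n): small problems stay on
    the host (PCIe transfer overhead dominates), large ones go to the
    accelerator where peak throughput wins."""
    flops = 2.0 * m * n * k                      # multiply-adds in the GEMM
    return "accelerator" if flops >= 2.0 * threshold ** 3 else "host"
```

A production version would compare estimated transfer time (bytes moved divided by the measured PCIe bandwidth) against the estimated compute-time saving rather than a single cube threshold.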
Short-term memory coding in children with intellectual disabilities.
Henry, Lucy
2008-05-01
To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities nor MA groups showed evidence for memory coding strategies. However, children in these groups with MAs above 6 years showed significant visual similarity and word length effects, broadly consistent with an intermediate stage of dual visual and verbal coding. These results suggest that developmental progressions in memory coding strategies are independent of intellectual disabilities status and consistent with MA.
Yoo, Won-Gyu
2015-01-01
[Purpose] This study examined the effects of different computer typing speeds on the acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing speed groups using an accelerometer and the CONFORMat system. [Results] Fingertip contact pressure was higher in the high typing speed group than in the low and medium typing speed groups. Fingertip acceleration was likewise higher in the high typing speed group than in the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.
Progress toward openness, transparency, and reproducibility in cognitive neuroscience.
Gilmore, Rick O; Diaz, Michele T; Wyble, Brad A; Yarkoni, Tal
2017-05-01
Accumulating evidence suggests that many findings in psychological science and cognitive neuroscience may prove difficult to reproduce; statistical power in brain imaging studies is low and has not improved recently; software errors in analysis tools are common and can go undetected for many years; and, a few large-scale studies notwithstanding, open sharing of data, code, and materials remain the rare exception. At the same time, there is a renewed focus on reproducibility, transparency, and openness as essential core values in cognitive neuroscience. The emergence and rapid growth of data archives, meta-analytic tools, software pipelines, and research groups devoted to improved methodology reflect this new sensibility. We review evidence that the field has begun to embrace new open research practices and illustrate how these can begin to address problems of reproducibility, statistical power, and transparency in ways that will ultimately accelerate discovery. © 2017 New York Academy of Sciences.
Effect of physical training in cool and hot environments on +Gz acceleration tolerance in women
NASA Technical Reports Server (NTRS)
Brock, P. J.; Sciaraffa, D.; Greenleaf, J. E.
1982-01-01
Acceleration tolerance, plasma volume, and maximal oxygen uptake were measured in 15 healthy women before and after submaximal isotonic exercise training periods in cool and hot environments. The women were divided on the basis of age, maximal oxygen uptake, and +Gz tolerance into three groups: a group that exercised in heat (40.6 C), a group that exercised at a lower temperature (18.7 C), and a sedentary control group that functioned in the cool environment. There was no significant change in the +Gz tolerance in any group after training, and terminal heart rates were similar within each group. It is concluded that induction of moderate acclimation responses without increases in sweat rate or resting plasma volume has no influence on +Gz acceleration tolerance in women.
A Peer Helpers Code of Behavior.
ERIC Educational Resources Information Center
de Rosenroll, David A.
This document presents a guide for developing a peer helpers code of behavior. The first section discusses issues relevant to the trainers. These issues include whether to give a model directly to the group or whether to engender "ownership" of the code by the group; timing of introduction of the code; and addressing the issue of…
Outdoor Test Facility and Related Facilities | Photovoltaic Research | NREL
Researchers at the Outdoor Test Facility (OTF) evaluate advanced or emerging photovoltaic (PV) technologies under simulated and accelerated indoor and outdoor conditions, and evaluate prototype, pre-commercial, and commercial PV modules. One of the major roles of researchers at the OTF is to work with industry to develop uniform, consensus standards and codes for testing PV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Jeffrey
Tango enables the accelerated numerical solution of the multiscale problem of self-consistent transport and turbulence. Fast turbulence results in fluxes of heat and particles that slowly change the mean profiles of temperature and density. The fluxes are computed by separate turbulence simulation codes; Tango solves for the self-consistent change in mean temperature or density given those fluxes.
1994-12-01
[Report documentation page fragment: Army Research Laboratory, ATTN: AMSRL-WT-PA, Aberdeen Proving Ground, MD 21005-5066; contents listings (distance vs. time calculation; comparison of calculated thrust curves) not otherwise recoverable.]
Laser generated Ge ions accelerated by additional electrostatic field for implantation technology
NASA Astrophysics Data System (ADS)
Rosinski, M.; Gasior, P.; Fazio, E.; Ando, L.; Giuffrida, L.; Torrisi, L.; Parys, P.; Mezzasalma, A. M.; Wolowski, J.
2013-05-01
The paper presents research on the optimization of the laser ion implantation method with electrostatic acceleration/deflection, including numerical simulations by means of the Opera 3D code and experimental tests at the IPPLM, Warsaw. To initiate the ablation process, an Nd:YAG laser system with a repetition rate of 10 Hz, a pulse duration of 3.5 ns, and a pulse energy of 0.5 J was applied. Ion time-of-flight diagnostics were used in situ to characterize the concentration and energy distribution of the obtained ion streams, while post-mortem analysis of the implanted samples was conducted by means of XRD, FTIR, and Raman spectroscopy. The predictions of the Opera 3D code are compared with the results of the ion diagnostics in the real experiment. To give a complete picture of the method, the post-mortem results of the XRD, FTIR, and Raman characterization techniques are discussed. The experimental results show that a micrometer-sized crystalline Ge phase and/or an amorphous one develops only after a thermal annealing treatment.
Bidirectional tornado modes on the Joint European Torus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandquist, P.; Sharapov, S. E.; Lisak, M.
In discharges on the Joint European Torus [P. H. Rebut and B. E. Keen, Fusion Technol. 11, 13 (1987)] with safety factor q(0)<1 and high-power ion cyclotron resonance heating (ICRH), monster sawtooth crashes are preceded by frequency-sweeping 'tornado modes' in the toroidal Alfven eigenmode frequency range. A suite of equilibrium and spectral magnetohydrodynamical codes is used for explaining the observed evolution of the tornado mode frequency and for identifying the temporal evolution of the safety factor inside the q=1 radius just before sawtooth crashes. In some cases, the tornado modes are observed simultaneously with both positive and negative toroidal mode numbers. Hence, a free energy source other than the radial gradient of the energetic ion pressure exciting these modes is sought. The distribution function of the ICRH-accelerated ions is assessed with the SELFO code [J. Hedin et al., Nucl. Fusion 42, 527 (2002)] and the energetic particle drive due to the velocity-space anisotropy of ICRH-accelerated ions is considered analytically as the possible source for excitation of bidirectional tornado modes.
Beam tracking simulation in the central region of a 13 MeV PET cyclotron
NASA Astrophysics Data System (ADS)
Anggraita, Pramudita; Santosa, Budi; Taufik, Mulyani, Emy; Diah, Frida Iswinning
2012-06-01
This paper reports the trajectory simulation of the proton beam in the central region of a 13 MeV PET cyclotron, operating with a negative proton beam (for easier beam extraction using a stripper foil), a 40 kV peak accelerating dee voltage at the fourth-harmonic frequency of 77.88 MHz, and an average magnetic field of 1.275 T. The central region covers fields of 240 mm × 240 mm × 30 mm at 1 mm resolution. The calculation was also done at a finer 0.25 mm resolution covering fields of 30 mm × 30 mm × 4 mm to examine the effects of the 0.55 mm horizontal width of the ion source window and the halted trajectories of the positive proton beam. The simulations show up to 7 turns of orbital trajectories, reaching about 1 MeV of beam energy. The distributions of accelerating electric fields and magnetic fields inside the cyclotron were calculated in three dimensions using the Opera3D code and Tosca modules for static magnetic and electric fields. The trajectory simulation was carried out using the Scilab 5.3.3 code.
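The core of any such tracking code is the particle pusher. The sketch below is a toy, non-relativistic Boris pusher for a proton gyrating in a uniform axial field, not a reconstruction of the Opera3D/Scilab model: the only value taken from the abstract is the 1.275 T average field, while the step size and initial speed are illustrative.

```python
import math

# Toy (non-relativistic) Boris pusher for a proton in a uniform axial
# magnetic field. Only the 1.275 T field value comes from the abstract;
# everything else (step size, initial speed) is illustrative.
Q = 1.602176634e-19    # proton charge [C]
M = 1.67262192e-27     # proton mass [kg]
B = (0.0, 0.0, 1.275)  # uniform field along z [T]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def boris_step(x, v, dt):
    """Advance position and velocity one step (E = 0: pure magnetic rotation)."""
    t = tuple(Q*dt/(2*M)*bi for bi in B)
    t2 = sum(ti*ti for ti in t)
    s = tuple(2*ti/(1 + t2) for ti in t)
    vp = tuple(vi + ci for vi, ci in zip(v, cross(v, t)))      # v' = v + v x t
    v_new = tuple(vi + ci for vi, ci in zip(v, cross(vp, s)))  # v+ = v + v' x s
    x_new = tuple(xi + vi*dt for xi, vi in zip(x, v_new))
    return x_new, v_new

def gyrate(n_steps, dt):
    x, v = (0.0, 0.0, 0.0), (1.0e6, 0.0, 0.0)  # 1e6 m/s, ~5 keV proton
    for _ in range(n_steps):
        x, v = boris_step(x, v, dt)
    return x, v

# One full gyration: period T = 2*pi*m/(qB), about 51.5 ns at 1.275 T.
T = 2*math.pi*M/(Q*1.275)
x, v = gyrate(2000, T/2000)
speed = math.sqrt(sum(vi*vi for vi in v))
print(f"period = {T*1e9:.1f} ns, |v| drift = {abs(speed - 1e6):.2e} m/s")
```

The Boris rotation conserves the speed exactly in a pure magnetic field, which is why it is the standard pusher in accelerator and plasma tracking codes; after one period the particle returns to its starting point up to a small phase error.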
Badal, Andreu; Badano, Aldo
2009-11-01
Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
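The GPU speedup rests on the fact that photon histories are independent, so each history maps naturally to one thread. The sketch below shows that per-history structure at toy level in plain Python (mono-energetic photons, pure attenuation through a slab, no scattering); it is not PENELOPE physics, and the coefficient and thickness are illustrative.

```python
import math, random

# Each photon history below is independent of the others: this is the
# parallelism a GPU Monte Carlo code exploits, one history per thread.
# Physics is toy-level (exponential attenuation only, no scattering).
def transmitted_fraction(mu, thickness, n_histories, seed=1):
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_histories):
        # Sample the free path from the exponential attenuation law.
        path = -math.log(rng.random()) / mu
        if path > thickness:
            passed += 1
    return passed / n_histories

mu = 0.5   # attenuation coefficient [1/cm], illustrative
d = 2.0    # slab thickness [cm], illustrative
mc = transmitted_fraction(mu, d, 200_000)
analytic = math.exp(-mu * d)
print(f"MC = {mc:.4f}, analytic = {analytic:.4f}")
```

With 200,000 histories the Monte Carlo estimate agrees with the analytic transmission exp(-mu*d) to about three decimal places; on a GPU the loop body would be the per-thread kernel.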
Davis, J P; Akella, S; Waddell, P H
2004-01-01
Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists and statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized; one example is Felsenstein's PHYLIP code, written in C, for the UPGMA and neighbor-joining algorithms. However, the ability to process more than a few tens of taxa in a reasonable amount of time using conventional computers has not yielded a satisfactory speedup in data processing, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA algorithm execution by a factor of a hundred over that of the PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used not only to accelerate phylogenetics algorithm performance on the desktop, but also on larger, high-performance computing engines, thus enabling the high-speed processing of data sets involving thousands of taxa.
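To make concrete what is being accelerated, here is a compact re-statement of the UPGMA clustering loop in plain Python: repeatedly merge the closest pair of clusters and update distances by average linkage. This is an illustrative sketch of the algorithm itself, not the paper's hardware implementation or PHYLIP's C code.

```python
# Plain-Python UPGMA sketch: the O(n^2) closest-pair scan inside the merge
# loop is the hot spot that custom hardware accelerates.
def upgma(labels, dist):
    """labels: list of taxon names; dist: dict {(i, j): distance}."""
    clusters = {i: (name, 1) for i, name in enumerate(labels)}  # id -> (tree, size)
    d = {frozenset(k): v for k, v in dist.items()}
    next_id = len(labels)
    while len(clusters) > 1:
        # Find the closest pair among still-active clusters.
        (a, b), _ = min(((tuple(k), v) for k, v in d.items()
                         if all(i in clusters for i in k)),
                        key=lambda kv: kv[1])
        (ta, na), (tb, nb) = clusters.pop(a), clusters.pop(b)
        # Average-linkage (size-weighted) update to the merged cluster.
        for c in clusters:
            da = d[frozenset((a, c))]
            db = d[frozenset((b, c))]
            d[frozenset((next_id, c))] = (na*da + nb*db) / (na + nb)
        clusters[next_id] = ((ta, tb), na + nb)
        next_id += 1
    return next(iter(clusters.values()))[0]

dist = {(0, 1): 2.0, (0, 2): 8.0, (1, 2): 8.0}
tree = upgma(["A", "B", "C"], dist)
print(tree)  # A and B (distance 2) merge first, then join C
```

Each merge rescans the distance table, giving the cubic overall cost that limits software UPGMA on large taxon sets.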
FBILI method for multi-level line transfer
NASA Astrophysics Data System (ADS)
Kuzmanovska, O.; Atanacković, O.; Faurobert, M.
2017-07-01
Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such a non-linear and non-local problem be as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) method developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme characterized by a very high convergence rate without the need to complement it with additional acceleration techniques. In this paper we make the implementation of the FBILI method for multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of Ca II line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well-known code MULTI.
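The generic reason a Gauss-Seidel-type sweep converges faster than a Jacobi-type one is that freshly updated values are used immediately within the same sweep. The sketch below demonstrates this on a small diagonally dominant linear system; it is a generic illustration of the iteration style, not the FBILI transfer equations.

```python
# Jacobi vs Gauss-Seidel on a toy linear system Ax = b. The only difference
# is whether the inner loop reads old values (Jacobi) or the latest ones
# (Gauss-Seidel), which is the FBILI-style trick that speeds convergence.
def solve(A, b, use_latest, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        x_old = x[:]
        for i in range(n):
            src = x if use_latest else x_old   # Gauss-Seidel vs Jacobi
            s = sum(A[i][j] * src[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(xi - xo) for xi, xo in zip(x, x_old)) < tol:
            return x, it
    return x, max_iter

A = [[ 4.0, -1.0,  0.0],
     [-1.0,  4.0, -1.0],
     [ 0.0, -1.0,  4.0]]
b = [2.0, 4.0, 10.0]   # exact solution: x = [1, 2, 3]
x_j, iters_jacobi = solve(A, b, use_latest=False)
x_gs, iters_gs = solve(A, b, use_latest=True)
print(f"Jacobi: {iters_jacobi} iters, Gauss-Seidel: {iters_gs} iters")
```

For this system the Gauss-Seidel spectral radius is the square of the Jacobi one, so it reaches the same tolerance in roughly half the iterations.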
Luo, W; Yu, T P; Chen, M; Song, Y M; Zhu, Z C; Ma, Y Y; Zhuo, H B
2014-12-29
Generation of attosecond x-ray pulses attracts more and more attention within the advanced light source user community due to its potentially wide applications. Here we propose an all-optical scheme to generate bright, attosecond hard x-ray pulse trains by Thomson backscattering off similarly structured electron beams produced in a vacuum channel by a tightly focused laser pulse. Design parameters for a proof-of-concept experiment are presented and demonstrated by using a particle-in-cell code and a four-dimensional laser-Compton scattering simulation code to model both the laser-based electron acceleration and Thomson scattering processes. Trains of 200-attosecond-duration hard x-ray pulses holding stable longitudinal spacing, with photon energies approaching 50 keV and maximum achievable peak brightness up to 10^20 photons/s/mm^2/mrad^2/0.1%BW for each micro-bunch, are observed. The suggested physical scheme for attosecond x-ray pulse train generation may directly access the fastest time scales relevant to electron dynamics in atoms, molecules and materials.
Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.
2014-07-01
Deep absorption lines with an extremely high velocity of ~0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to resolve this is to constrain physical parameters as a function of distance from the source. In order to study the spatial dependence of the parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in AGN outflows. Our code implements radiative transfer in 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the Fe XXV and Fe XXVI absorption features seen in the spectra. Also, broad Fe emission lines, which reflect the geometry and viewing angle, are successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on the physical parameters. We discuss launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-13
....Regulations.gov ; or to VA's OMB Desk Officer, OMB Human Resources and Housing Branch, New Executive Office...' Group Life Insurance (SGLI) or Veterans' Group Life Insurance (VGLI) prior to death. If the insured...' Group Life Insurance Accelerated Benefits Option application. The application must include a medical...
Asher, Elad; Reuveni, Haim; Shlomo, Nir; Gerber, Yariv; Beigel, Roy; Narodetski, Michael; Eldar, Michael; Or, Jacob; Hod, Hanoch; Shamiss, Arie; Matetzky, Shlomi
2015-01-01
The aim of this study was to compare, in patients presenting with acute chest pain, the clinical outcomes and cost-effectiveness of an accelerated diagnostic protocol utilizing contemporary technology in a chest pain unit versus routine care in an internal medicine department. Hospital and 90-day course were prospectively studied in 585 consecutive low-to-moderate-risk acute chest pain patients, of whom 304 were investigated in a designated chest pain center using a pre-specified accelerated diagnostic protocol, while 281 underwent routine care in an internal medicine ward. Hospitalization was longer in the routine care group than in the accelerated diagnostic protocol group (p<0.001). During hospitalization, 298 accelerated diagnostic protocol patients (98%) vs. 57 (20%) routine care patients underwent non-invasive testing (p<0.001). Throughout the 90-day follow-up, diagnostic imaging testing was performed in 125 (44%) and 26 (9%) patients in the routine care and accelerated diagnostic protocol groups, respectively (p<0.001). Ultimately, most patients in both groups had non-invasive imaging testing. The accelerated diagnostic protocol, compared with routine care, was associated with a lower incidence of readmissions for chest pain [8 (3%) vs. 24 (9%), p<0.01] and acute coronary syndromes [1 (0.3%) vs. 9 (3.2%), p<0.01] during the follow-up period. The accelerated diagnostic protocol remained a predictor of fewer acute coronary syndromes and readmissions after propensity score analysis [OR = 0.28 (95% CI 0.14-0.59)]. Cost per patient was similar in both groups [$2510 vs. $2703 for the accelerated diagnostic protocol and routine care groups, respectively (p = 0.9)]. An accelerated diagnostic protocol is clinically superior to, and as cost-effective as, routine care in acute chest pain patients, and may save time and resources.
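The reported OR = 0.28 comes from a propensity-score analysis; the sketch below only reproduces the simpler, crude (unadjusted) odds ratio and its Wald 95% confidence interval from the readmission counts quoted in the abstract, to show the arithmetic behind such an estimate. The function name and the Wald approximation are this sketch's choices, not the study's method.

```python
import math

# Crude odds ratio and Wald 95% CI from a 2x2 table. This is NOT the
# propensity-score-adjusted OR reported in the study; it only illustrates
# the underlying arithmetic on the quoted readmission counts.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = events/non-events in the exposed group, c/d = in the control."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Wald approximation
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Chest-pain readmissions: 8 of 304 (accelerated protocol) vs. 24 of 281 (routine).
or_, lo, hi = odds_ratio_ci(8, 304 - 8, 24, 281 - 24)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The crude OR of about 0.29 happens to be close to the adjusted 0.28 reported after propensity matching, suggesting limited confounding on this endpoint.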
Khoshkholgh, Roghaie; Keshavarz, Tahereh; Moshfeghy, Zeinab; Akbarzadeh, Marzieh; Asadi, Nasrin; Zare, Najaf
2016-01-01
Objective: To compare the effects of two auditory methods, by mother and by fetus, on the results of NST in 2011-2012. Materials and methods: In this single-blind clinical trial, 213 pregnant women with a gestational age of 37-41 weeks who had no pregnancy complications were randomly divided into 3 groups (auditory intervention for mother, auditory intervention for fetus, and control), each containing 71 subjects. In the intervention groups, music was played through the second 10 minutes of NST. The three groups were compared regarding baseline fetal heart rate and number of accelerations in the first and second 10 minutes of NST. The data were analyzed using one-way ANOVA, Kruskal-Wallis, and paired t-tests. Results: The results showed no significant difference among the three groups regarding baseline fetal heart rate in the first (p = 0.945) and second (p = 0.763) 10 minutes. However, a significant difference was found among the three groups concerning the number of accelerations in the second 10 minutes. Also, a significant difference was observed in the number of accelerations in the auditory intervention for mother (p = 0.013) and auditory intervention for fetus (p < 0.001) groups. The difference between the number of accelerations in the first and second 10 minutes was also statistically significant (p = 0.002). Conclusion: Music intervention was effective in increasing the number of accelerations, which is an indicator of fetal health. Yet, further studies are required on the issue. PMID:27385971
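The study's three-group comparison uses one-way ANOVA. As a reminder of what that test computes, here is the F statistic from scratch on hypothetical data (the numbers below are illustrative, not the trial's measurements):

```python
# One-way ANOVA F statistic from scratch: ratio of between-group to
# within-group mean squares. Data below are hypothetical acceleration
# counts per 10 minutes, NOT the trial's measurements.
def one_way_anova_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g)/len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g)/len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

control = [3, 4, 3, 5, 4]
mother  = [5, 6, 5, 7, 6]
fetus   = [7, 8, 7, 9, 8]
f = one_way_anova_f([control, mother, fetus])
print(f"F(2, 12) = {f:.2f}")
```

A large F (here about 28.6) indicates that between-group variation dominates within-group variation; the p-value would then come from the F(2, 12) distribution.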
Arida, Janet A; Bressler, Toby; Moran, Samantha; D'Arpino, Sara; Carr, Alaina; Hagan, Teresa L
2018-02-27
Mothers with ovarian cancer are at risk of experiencing additional demands given their substantial symptom burden and accelerated disease progression. This study describes the experience of mothers with ovarian cancer, elucidating the interaction between their roles as mothers and patients with cancer. We conducted a secondary analysis of focus groups with women with advanced ovarian cancer. Using descriptive coding, we developed a coding framework based on emerging findings and group consensus. We then identified higher-order themes capturing the breadth of experiences described by mothers with ovarian cancer. Eight of the 13 participants discussed motherhood. The mean age of participants was 48.38 (SD, 7.17) years. All women were white (9/9), most had some college education (6/9), and the majority were married (5/9). Mean time since diagnosis was 7.43 (SD, 4.69) months; more than half of women (5/9) were currently receiving treatment. Themes and exemplar quotes reflected participants' evolving self-identities from healthy mother to cancer patient to woman mothering with cancer. Subthemes related to how motherhood was impacted by symptoms, demands of treatment, and the need to gain acceptance of living with cancer. The experience of motherhood impacts how women experience cancer and how they evolve as survivors. Similarly, cancer influences mothering. Healthcare providers should understand and address the needs of mothers with ovarian cancer. This study adds to the limited literature in this area and offers insight into the unique needs faced by women mothering while facing advanced cancer.
Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S
2013-12-01
To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team (MDT) coding in respect of primary and secondary diagnoses and procedures, health resource groupings (HRGs), and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 changes to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 (16%) HRGs changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular, skull-base surgery), laryngology (within that, tracheostomy), and emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor MDT. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes without further quality assurance resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG 3.5 and compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of HRG change from 16% during the first audit cycle to 9% in the current audit cycle (P < 0.001). Otolaryngology coding is complex and susceptible to subjectivity, variability and error. Coding variability can be improved, but not eliminated, through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Shope, S. L.; Mazarakis, M. G.; Frost, C. A.; Poukey, J. W.; Turman, B. N.
Self-magnetically insulated transmission line (MITL) adders were used successfully in a number of Sandia accelerators such as HELIA, HERMES III, and SABRE. Most recently we used a MITL adder in the RADLAC/SMILE electron beam accelerator to produce high-quality, small-radius (r < 2 cm), 11-15 MeV, 50-100 kA beams with a small transverse velocity v⊥/c = β⊥ ≤ 0.1. In RADLAC/SMILE, a coaxial MITL passed through eight 2 MV vacuum envelopes. The MITL summed the voltages of all eight feeds to a single foilless diode. The experimental results are in good agreement with code simulations. Our success with the MITL technology led us to investigate its application to higher-energy accelerator designs. We have a conceptual design for a cavity-fed MITL that sums the voltages from 100 identical, inductively isolated cavities. Each cavity is a toroidal structure that is driven simultaneously by four 8-ohm pulse-forming lines, providing a 1 MV voltage pulse to each of the 100 cavities. The point-design accelerator is 100 MV, 500 kA, with a 30-50 ns FWHM output pulse.
Embedded Streaming Deep Neural Networks Accelerator With Applications.
Dundar, Aysegul; Jin, Jonghoon; Martini, Berin; Culurciello, Eugenio
2017-07-01
Deep convolutional neural networks (DCNNs) have become a very powerful tool in visual perception. DCNNs have applications in autonomous robots, security systems, mobile phones, and automobiles, where high throughput of the feedforward evaluation phase and power efficiency are important. Because of this increased usage, many field-programmable gate array (FPGA)-based accelerators have been proposed. In this paper, we present an optimized streaming method for a DCNN hardware accelerator on an embedded platform. The streaming method acts as a compiler, transforming a high-level representation of DCNNs into operation codes to execute applications in a hardware accelerator. The proposed method utilizes the maximum computational resources available, based on a novel scheduled routing topology that combines data reuse and data concatenation. It is tested with a hardware accelerator implemented on the Xilinx Kintex-7 XC7K325T FPGA. The system fully explores weight-level and node-level parallelizations of DCNNs and achieves a peak performance of 247 G-ops while consuming less than 4 W of power. We test our system with applications on object classification and object detection in real-world scenarios. Our results indicate high performance efficiency, outperforming all other presented platforms while running these applications.
Direct Laser Acceleration in Laser Wakefield Accelerators
NASA Astrophysics Data System (ADS)
Shaw, J. L.; Froula, D. H.; Marsh, K. A.; Joshi, C.; Lemos, N.
2017-10-01
The direct laser acceleration (DLA) of electrons in a laser wakefield accelerator (LWFA) has been investigated. We show that when there is a significant overlap between the drive laser and the trapped electrons in a LWFA cavity, the accelerating electrons can gain energy from the DLA mechanism in addition to LWFA. The properties of the electron beams produced in a LWFA, where the electrons are injected by ionization injection, have been investigated using particle-in-cell (PIC) code simulations. Particle tracking was used to demonstrate the presence of DLA in LWFA. Further PIC simulations comparing LWFA with and without DLA show that the presence of DLA can lead to electron beams that have maximum energies that exceed the estimates given by the theory for the ideal blowout regime. The magnitude of the contribution of DLA to the energy gained by the electron was found to be on the order of the LWFA contribution. The presence of DLA in a LWFA can also lead to enhanced betatron oscillation amplitudes and increased divergence in the direction of the laser polarization. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
NASA Astrophysics Data System (ADS)
Watanabe, Yukinobu; Kin, Tadahiro; Araki, Shouhei; Nakayama, Shinsuke; Iwamoto, Osamu
2017-09-01
A comprehensive research program on deuteron nuclear data motivated by development of accelerator-based neutron sources is being executed. It is composed of measurements of neutron and gamma-ray yields and production cross sections, modelling of deuteron-induced reactions and code development, nuclear data evaluation and benchmark test, and its application to medical radioisotopes production. The goal of this program is to develop a state-of-the-art deuteron nuclear data library up to 200 MeV which will be useful for the design of future (d,xn) neutron sources. The current status and future plan are reviewed.
Emittance Growth in the DARHT-II Linear Induction Accelerator
NASA Astrophysics Data System (ADS)
Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.; McCuistian, B. Trent; Mostrom, Christopher B.; Schulze, Martin E.; Thoma, Carsten H.
2017-11-01
The Dual-Axis Radiographic Hydrotest (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. Some of the possible causes for the emittance growth in the DARHT LIA have been investigated using particle-in-cell (PIC) codes, and are discussed in this article. The results suggest that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.
A Bonner Sphere Spectrometer with extended response matrix
NASA Astrophysics Data System (ADS)
Birattari, C.; Dimovasili, E.; Mitaroff, A.; Silari, M.
2010-08-01
This paper describes the design, calibration and applications at high-energy accelerators of an extended-range Bonner Sphere neutron Spectrometer (BSS). The BSS was designed using the FLUKA Monte Carlo code, investigating several combinations of materials and moderator diameters for the high-energy channels. The system was calibrated at PTB in Braunschweig, Germany, using monoenergetic neutron beams in the energy range 144 keV-19 MeV. It was subsequently tested with Am-Be source neutrons and in the simulated workplace neutron field at CERF (the CERN-EU high-energy reference field facility). Since 2002, it has been employed for neutron spectral measurements around CERN accelerators.
Solenoid Fringe Field Effects for the Neutrino Factory Linac - MAD-X Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Aslaninejad; C. Bontoiu; J. Pasternak; J. Pozimski; Alex Bogacz
2010-05-01
The International Design Study for the Neutrino Factory (IDS-NF) assumes the first stage of muon acceleration (up to 900 MeV) to be implemented with a solenoid-based linac. The linac consists of three styles of cryomodules, containing focusing solenoids and a varying number of SRF cavities for acceleration. Fringe fields of the solenoids and the focusing effects in the SRF cavities have a significant impact on the transverse beam dynamics. Using an analytical formula, the effects of fringe fields are studied in MAD-X. The resulting betatron functions are compared with the results of beam dynamics simulations using the OptiM code.
NASA Astrophysics Data System (ADS)
da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.
2014-05-01
Intra prediction is a very important tool in current video coding standards. High-Efficiency Video Coding (HEVC) intra prediction achieves relevant gains in encoding efficiency compared to previous standards, but with a very significant increase in computational complexity, since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity, targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with single-view and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture, and it exploits the relationship between prediction units (PUs) of neighboring depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
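The generic idea behind a reduced-subset mode search can be sketched as a coarse-to-fine scan: evaluate a sparse sample of the 33 angular modes, then refine only around the best candidate. The cost function, stride, and refinement radius below are toy stand-ins for the SATD/RD cost and the paper's texture-based subset selection, chosen only to illustrate the evaluation savings.

```python
# Coarse-to-fine sketch of a reduced-subset intra mode search. The cost
# function here is a toy proxy for the per-mode SATD/RD cost a real HEVC
# encoder computes; stride and radius are illustrative choices.
def fast_intra_search(cost, n_modes=33, stride=4, radius=2):
    evaluated = {}
    def c(m):
        if m not in evaluated:
            evaluated[m] = cost(m)   # cache: each mode is costed once
        return evaluated[m]
    coarse = range(0, n_modes, stride)
    best = min(coarse, key=c)
    refine = range(max(0, best - radius), min(n_modes, best + radius + 1))
    best = min(refine, key=c)
    return best, len(evaluated)

true_direction = 17                       # toy "texture direction" of the block
cost = lambda m: abs(m - true_direction)  # unimodal toy cost around it
mode, n_eval = fast_intra_search(cost)
print(f"picked mode {mode} after {n_eval} of 33 evaluations")
```

For a unimodal cost the two-stage search finds the same mode as exhaustive evaluation while costing far fewer candidates (13 instead of 33 here), which is the kind of saving the proposed subset-plus-neighborhood strategy exploits.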
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.
1999-01-01
The overall objective of this study was to evaluate the effects of turbulence models on the wake prediction capability of a 3-D numerical analysis. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes - Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user-defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.
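The four-stage Runge-Kutta time-marching used by such finite-volume solvers can be illustrated with a minimal sketch (the stage coefficients below are the common Jameson-style textbook choice, not necessarily ADPAC's):

```python
import numpy as np

def rk4stage_step(u, residual, dt, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """One multi-stage Runge-Kutta pseudo-time step of the Jameson type
    used by many finite-volume flow solvers:
        u(0) = u^n;  u(k) = u(0) + alpha_k * dt * R(u(k-1));  u^{n+1} = u(4).
    residual(u) returns the discrete flux residual R(u); only the latest
    stage value is stored, which keeps memory usage low."""
    u0 = np.asarray(u, dtype=float)
    uk = u0
    for a in alphas:
        uk = u0 + a * dt * residual(uk)
    return uk
```

For a linear residual R(u) = -u this stage sequence reproduces the classical fourth-order amplification factor, e.g. `rk4stage_step(1.0, lambda u: -u, 0.1)` is very close to exp(-0.1).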
ERIC Educational Resources Information Center
Steenbergen-Hu, Saiying; Makel, Matthew C.; Olszewski-Kubilius, Paula
2016-01-01
Two second-order meta-analyses synthesized approximately 100 years of research on the effects of ability grouping and acceleration on K-12 students' academic achievement. Outcomes of 13 ability grouping meta-analyses showed that students benefited from within-class grouping (0.19 ≤ g ≤ 0.30), cross-grade subject grouping (g = 0.26), and special…
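The effect sizes g reported in such meta-analyses are Hedges' g, i.e. the pooled-SD standardized mean difference with a small-sample bias correction, which can be computed as:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: Cohen's d (mean difference over the pooled standard
    deviation) multiplied by the small-sample correction
    J = 1 - 3 / (4*(n1 + n2) - 9)."""
    s_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    return d * (1 - 3.0 / (4 * (n1 + n2) - 9))
```

For two groups of 30 with means 10 and 9 and common SD 2, this gives g ≈ 0.49, i.e. about half a standard deviation of advantage.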
Cooper, Christopher D; Bardhan, Jaydeep P; Barba, L A
2014-03-01
The continuum theory applied to biomolecular electrostatics leads to an implicit-solvent model governed by the Poisson-Boltzmann equation. Solvers relying on a boundary integral representation typically do not consider features like solvent-filled cavities or ion-exclusion (Stern) layers, due to the added difficulty of treating multiple boundary surfaces. This has hindered meaningful comparisons with volume-based methods, and the effects of including these features on accuracy have remained unknown. This work presents a solver called PyGBe that uses a boundary-element formulation and can handle multiple interacting surfaces. It was used to study the effects of solvent-filled cavities and Stern layers on the accuracy of calculating the solvation energy and binding energy of proteins, using the well-known APBS finite-difference code for comparison. The results suggest that if the accuracy required by an application allows errors larger than about 2% in solvation energy, the simpler, single-surface model can be used. When calculating binding energies, the need for a multi-surface model is problem-dependent, becoming more critical when ligand and receptor are of comparable size. Compared with the APBS solver, the boundary-element solver is faster when the accuracy requirements are higher. The cross-over point for the PyGBe code is on the order of 1-2% error, when running on one GPU card (NVIDIA Tesla C2075), compared with APBS running on six Intel Xeon CPU cores. PyGBe achieves algorithmic acceleration of the boundary element method using a treecode, and hardware acceleration using GPUs via PyCUDA, from a user-visible code that is all Python. The code is open-source under the MIT license.
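As a point of reference for the implicit-solvent solvation energies such solvers compute, the closed-form Born model for a single spherical ion (a textbook sanity check, not part of the PyGBe code) gives:

```python
import math

def born_solvation_energy(z, radius_A, eps_r=80.0):
    """Born-model solvation free energy [kJ/mol] of a spherical ion of
    integer charge z and radius radius_A (angstroms) transferred from
    vacuum into a dielectric of relative permittivity eps_r:
        dG = -(N_A * (z*e)**2 / (8*pi*eps0*a)) * (1 - 1/eps_r)."""
    e = 1.602176634e-19      # elementary charge [C]
    eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
    NA = 6.02214076e23       # Avogadro constant [1/mol]
    a = radius_A * 1e-10     # radius in metres
    dG = -NA * (z * e) ** 2 / (8 * math.pi * eps0 * a) * (1 - 1 / eps_r)
    return dG / 1000.0       # J/mol -> kJ/mol
```

For a monovalent ion with a Born radius of about 1.68 Å in water (eps_r ≈ 80) this yields roughly -410 kJ/mol, the right order of magnitude against which a numerical solver can be checked.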
Ultra-high-energy cosmic rays from low-luminosity active galactic nuclei
NASA Astrophysics Data System (ADS)
Duţan, Ioana; Caramete, Laurenţiu I.
2015-03-01
We investigate the production of ultra-high-energy cosmic rays (UHECRs) in relativistic jets from low-luminosity active galactic nuclei (LLAGN). We start by proposing a model for the UHECR contribution from the black holes (BHs) in LLAGN, which present a jet power Pj ⩽ 10^46 erg s^-1. This is in contrast to the opinion that only high-luminosity AGN can accelerate particles to energies ⩾ 50 EeV. We rewrite the equations that describe the synchrotron self-absorbed emission of a non-thermal particle distribution to obtain the observed radio flux density from sources with a flat-spectrum core and its relationship to the jet power. We find that the UHECR flux depends on the observed radio flux density, the distance to the AGN, and the BH mass, where the particle acceleration regions can be sustained by magnetic energy extraction from the BH at the center of the AGN. We use a complete sample of 29 radio sources with a total flux density at 5 GHz greater than 0.5 Jy to make predictions for the maximum particle energy, luminosity, and flux of the UHECRs from nearby AGN. These predictions are then used in a semi-analytical code developed in Mathematica (SAM code) as inputs for Monte-Carlo simulations to obtain the distribution of arrival directions at the Earth and the energy spectrum of the UHECRs, taking into account their deflection in intergalactic magnetic fields. For comparison, we also use the CRPropa code with the same initial conditions as for the SAM code. Importantly, to calculate the energy spectrum we also include the weighting of the UHECR flux for each UHECR source. Finally, we compare the energy spectrum of the UHECRs with that obtained by the Pierre Auger Observatory.
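An order-of-magnitude bound on the maximum particle energy in such sources is the Hillas confinement criterion, E_max ≈ Z e B R βc (a generic estimate, not the paper's detailed jet model; the field strength and size below are illustrative values):

```python
def hillas_emax_eV(Z, B_tesla, R_metres, beta=1.0):
    """Hillas confinement limit E_max ~ Z * e * B * R * beta * c, returned
    directly in eV: dividing the energy in joules by the elementary charge
    leaves Z * B * R * beta * c numerically."""
    c = 2.99792458e8  # speed of light [m/s]
    return Z * B_tesla * R_metres * beta * c
```

For a ~1 microgauss (1e-10 T) field coherent over ~1 kpc (3.086e19 m), protons are confined only up to roughly 1 EeV, which is why reaching tens of EeV requires either much stronger fields, much larger regions, or heavier nuclei (larger Z).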
Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime
NASA Astrophysics Data System (ADS)
Cowan, B. M.; Kalmykov, S. Y.; Beck, A.; Davoine, X.; Bunkers, K.; Lifschitz, A. F.; Lefebvre, E.; Bruhwiler, D. L.; Shadwick, B. A.; Umstadter, D. P.
2012-08-01
Electron self-injection and acceleration until dephasing in the blowout regime are studied for a set of initial conditions typical of recent experiments with 100-terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, 3D particle-in-cell modelling are examined. First, the Cartesian code vorpal (Nieter, C. and Cary, J. R. 2004 VORPAL: a versatile plasma simulation code. J. Comput. Phys. 196, 538) using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is to use reduced-geometry codes. In our case, the quasi-cylindrical code calder-circ (Lifschitz, A. F. et al. 2009 Particle-in-cell modelling of laser-plasma interaction using Fourier decomposition. J. Comput. Phys. 228(5), 1803-1814) uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two modes, reducing the computational load to roughly that of a planar Cartesian simulation while preserving the 3D nature of the interaction. This significant economy of resources allows using fine resolution in the direction of propagation and a small time step, making numerical dispersion vanishingly small, together with a large number of particles per cell, enabling good particle statistics. Quantitative agreement of the two simulations indicates that these are free of numerical artefacts. Both approaches thus retrieve the physically correct evolution of the plasma bubble, recovering the intrinsic connection of electron self-injection to the nonlinear optical evolution of the driver.
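The azimuthal mode decomposition underlying quasi-cylindrical codes such as calder-circ can be illustrated with a toy field (a minimal sketch: a field containing only m = 0 and m = 1 content, as for a linearly polarised driver, is reproduced exactly after truncating to two modes):

```python
import numpy as np

# Sample a field on a ring of fixed radius, decompose it into azimuthal
# (poloidal) Fourier modes F(theta) = sum_m F_m * exp(i*m*theta), keep only
# m = 0 and m = 1, and reconstruct -- the truncation step quasi-cylindrical
# PIC codes apply to every field and current component.
theta = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
field = 1.0 + 0.4 * np.cos(theta)            # toy field: m=0 plus m=1 content
coeffs = np.fft.rfft(field) / theta.size     # azimuthal Fourier coefficients
coeffs[2:] = 0.0                             # truncate: keep m = 0, 1 only
reconstructed = np.fft.irfft(coeffs * theta.size, n=theta.size)
```

Because the toy field has no higher-order content, the two retained modes reconstruct it exactly; for a real laser-plasma interaction the approximation holds to the extent that the interaction stays close to cylindrically symmetric.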
NASA Astrophysics Data System (ADS)
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing, and resuspension. The rheology of sediment induced under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian, and Newtonian flows by proposing a model that combines the yielding, shear, and suspension layers needed to accurately predict the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface, and by a concentration-based suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water-surface profiles.
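The Herschel-Bulkley-Papanastasiou constitutive model mentioned above regularises the yield stress with an exponential term so that the effective viscosity stays bounded at vanishing shear rate (the parameter values below are illustrative, not those of the paper):

```python
import math

def hbp_effective_viscosity(gamma_dot, tau_y=10.0, K=1.0, n=1.2, m=100.0):
    """Papanastasiou-regularised Herschel-Bulkley effective viscosity:
        mu_eff(g) = K * g**(n - 1) + tau_y * (1 - exp(-m * g)) / g
    for shear rate g > 0 (consistency K, flow index n, yield stress tau_y,
    regularisation parameter m). The exponential factor keeps the
    yield-stress term bounded (it tends to tau_y * m as g -> 0) instead of
    the singular unregularised tau_y / g."""
    g = max(abs(gamma_dot), 1e-12)
    return K * g ** (n - 1) + tau_y * (1.0 - math.exp(-m * g)) / g
```

At low shear rates the yield-stress term dominates and the material behaves as nearly rigid; at high shear rates it contributes only tau_y / g and the power-law part takes over, which is the behaviour the sediment surface model relies on.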
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, P. (Fermilab); Cary, J.
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization.
The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization to all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.
Method of accelerating photons by a relativistic plasma wave
Dawson, John M.; Wilks, Scott C.
1990-01-01
Photons of a laser pulse have their group velocity accelerated in a plasma as they are placed on a downward density gradient of a plasma wave of which the phase velocity nearly matches the group velocity of the photons. This acceleration results in a frequency upshift. If the unperturbed plasma has a slight density gradient in the direction of propagation, the photon frequencies can be continuously upshifted to significantly greater values.
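The mechanism relies on the plasma dispersion relation ω² = ω_p² + c²k²: where the density, and hence ω_p, is lower, photons of a given frequency travel with a group velocity closer to c, so riding down a density gradient changes both their speed and frequency. A minimal numerical sketch of these standard plasma relations (not the patent's apparatus):

```python
import math

def plasma_frequency(n_e):
    """Electron plasma frequency [rad/s] for electron density n_e [m^-3]:
    omega_p = sqrt(n_e * e**2 / (eps0 * m_e))."""
    e, eps0, m_e = 1.602176634e-19, 8.8541878128e-12, 9.1093837015e-31
    return math.sqrt(n_e * e ** 2 / (eps0 * m_e))

def group_velocity(omega, n_e):
    """Photon group velocity [m/s] in an underdense plasma, from the
    dispersion relation omega**2 = omega_p**2 + c**2 * k**2:
    v_g = c * sqrt(1 - (omega_p / omega)**2)."""
    c = 2.99792458e8
    wp = plasma_frequency(n_e)
    return c * math.sqrt(1.0 - (wp / omega) ** 2)
```

As n_e decreases along the photon's path, ω_p drops and v_g rises toward c; it is this density-dependent dispersion that the plasma-wave gradient exploits to upshift the photon frequency.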
Abou Samra, Waleed Ali; El Emam, Dalia Sabry; Farag, Rania Kamel; Abouelkheir, Hossam Youssef
2016-01-01
Aim. To compare objective and subjective outcomes after simultaneous wave front guided (WFG) PRK and accelerated corneal cross-linking (CXL) in patients with progressive keratoconus versus sequential WFG PRK 6 months after CXL. Methods. 62 eyes with progressive keratoconus were divided into two groups: the first, including 30 eyes, underwent simultaneous WFG PRK with accelerated CXL; the second, including 32 eyes, underwent subsequent WFG PRK performed 6 months after accelerated CXL. Visual, refractive, topographic, and aberrometric data were determined preoperatively and during a 1-year follow-up period, and the results were compared between the two studied groups. Results. All evaluated visual, refractive, and aberrometric parameters demonstrated highly significant improvement in both studied groups (all P < 0.001). A significant improvement was observed in keratometric and Q values. The improvement in all parameters was stable until the end of follow-up. Likewise, no significant difference was found between the two groups in any of the recorded parameters. Subjective data revealed similarly significant improvement in both groups. Conclusions. WFG PRK with accelerated CXL is an effective and safe option to improve vision in mild to moderate keratoconus. At one-year follow-up, there is no statistically significant difference between the simultaneous and sequential procedures. PMID:28127465
Zhang, Chen; Yun, Sining; Li, Xue; Wang, Ziqi; Xu, Hongfei; Du, Tingting
2018-05-11
To improve the methane yield and digestate utilization of anaerobic digestion (AD), low-cost composited accelerants consisting of urea (0.2-0.5%), bentonite (0.5-0.8%), active carbon (0.6-0.9%), and plant ash (0.01-0.3%) were designed and tested in batch experiments. Total biogas yield (485.7-681.9 mL/g VS) and methane content (63.0-66.6%) were remarkably enhanced in AD systems by adding accelerants, compared to those of the control group (361.9 mL/g VS, 59.4%). Composited accelerant addition led to the highest methane yield (454.1 mL/g VS), more than double that of the control group. The TS, VS, and CODt removal rates (29.7-55.3%, 50.9-63.0%, and 46.8-69.1%) for AD with accelerants were much higher than those of the control group (26.2%, 37.1%, and 39.6%). The improved digestate stability and enhanced fertilizer nutrient content (4.95-5.66%) confirmed that the digestate of AD systems with composited accelerants could safely serve as a potential component of bioorganic fertilizer. These findings open innovative avenues in composited accelerant development and application. Copyright © 2018 Elsevier Ltd. All rights reserved.