Science.gov

Sample records for accelerator simulation paradigm

  1. Simulation Accelerator

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under a NASA SBIR (Small Business Innovation Research) contract (NAS5-30905), EAI Simulation Associates, Inc., developed a new digital simulation computer, Starlight(tm). With an architecture based on the analog model of computation, Starlight(tm) outperforms all other computers on a wide range of continuous-system simulations. This system is used in a variety of applications, including aerospace, automotive, electric power, and chemical reactors.

  2. Hardware Accelerated Simulated Radiography

    SciTech Connect

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-04-12

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists.
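
    As a rough illustration of the absorption-only regime referenced above, the sketch below accumulates optical depth along parallel rays through a voxel grid and applies Beer-Lambert attenuation, which is the limit of the X-ray radiative transport equation that the hexahedron projection targets. The grid, attenuation values, and geometry are assumptions for the example; this is not the authors' GPU algorithm or mesh representation.

```python
# Minimal sketch (not the authors' GPU hexahedron projection): an absorption-only
# simulated radiograph over a regular voxel grid, I = I0 * exp(-sum(mu * ds)),
# i.e. the Beer-Lambert limit of the X-ray radiative transport equation.
# The grid, attenuation values, and detector geometry are illustrative assumptions.
import numpy as np

nx, ny, nz = 64, 64, 64           # voxel grid
dz = 0.1                          # path length per voxel along the ray (cm)
mu = np.zeros((nx, ny, nz))       # attenuation coefficients (1/cm)
mu[20:44, 20:44, 20:44] = 0.5     # a dense cube embedded in vacuum

I0 = 1.0                                   # incident intensity per detector pixel
optical_depth = mu.sum(axis=2) * dz        # integrate mu along z for every (x, y) ray
radiograph = I0 * np.exp(-optical_depth)   # detected intensity image (nx x ny)

print(radiograph.min(), radiograph.max())
```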

  3. Accelerator simulation using computers

    SciTech Connect

    Lee, M.; Zambre, Y.; Corbett, W.

    1992-01-01

    Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning, and operating the beam line. This paper shows how a "multi-track" simulation and analysis code can be used for these applications.

  4. Accelerator simulation using computers

    SciTech Connect

    Lee, M.; Zambre, Y.; Corbett, W.

    1992-01-01

    Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning, and operating the beam line. This paper shows how a "multi-track" simulation and analysis code can be used for these applications.

  5. Particle acceleration in cosmic plasmas – paradigm change?

    SciTech Connect

    Lyutikov, Maxim; Guo, Fan

    2015-07-21

    The presentation begins by considering the requirements on the acceleration mechanism. It is found that at least some particles in high-energy sources are accelerated by magnetic reconnection (and not by shocks). The two paradigms can be distinguished by the hardness of the spectra. Shocks typically produce spectra with p > 2 (relativistic shocks have p ~ 2.2); non-linear shocks & drift acceleration may give p < 2, e.g. p = 1.5; B-field dissipation can give p = 1. Then collapse of stressed magnetic X-point in force-free plasma and collapse of a system of magnetic islands are taken up, including island merger: forced reconnection. Spectra as functions of sigma are shown, and gamma ~ 10^9 is addressed. It is concluded that reconnection in magnetically-dominated plasma can proceed explosively, is an efficient means of particle acceleration, and is an important (perhaps dominant for some phenomena) mechanism of particle acceleration in high energy sources.
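
    The hardness argument above can be made concrete with the power-law form dN/dE ~ E^-p: the snippet below, with an assumed energy range, computes how much of the total particle energy sits above a threshold for the quoted indices, showing why p < 2 spectra are energetically dominated by their highest-energy particles while p > 2 spectra are not.

```python
# Quick check of the "spectral hardness" distinction cited above: for a power law
# dN/dE ~ E**-p between E_min and E_max, compute the fraction of the total particle
# energy carried above a threshold E0.  For p < 2 that fraction is dominated by the
# highest energies; for p > 2 it is not.  Energy bounds are illustrative assumptions.

def energy_fraction_above(E0, p, E_min=1.0, E_max=1e6):
    """Fraction of the integral of E*dN/dE carried by E > E0 (valid for p != 2)."""
    integral = lambda a, b: (b**(2 - p) - a**(2 - p)) / (2 - p)
    return integral(E0, E_max) / integral(E_min, E_max)

for p in (1.0, 1.5, 2.2):
    print(f"p = {p}: fraction of energy above E = 1e3 is "
          f"{energy_fraction_above(1e3, p):.3f}")
```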

  6. Accelerator radioisotope production simulations

    SciTech Connect

    Waters, L.S.; Wilson, W.B.

    1996-12-31

    We have identified 96 radionuclides now being used or under consideration for use in medical applications. Previously, we calculated the production of Mo-99 from enriched and depleted uranium targets at the 800-MeV energy used in the LAMPF accelerator at Los Alamos. We now consider the production of isotopes using lower energy beams, which may become available as a result of new high-intensity spallation target accelerators now being planned. The production of four radionuclides (Be-7, Cu-67, Mo-99, and Pt-195m) in a simplified proton accelerator target design is being examined. The LAHET, MCNP, and CINDER90 codes were used to model the target, transport a beam of protons and secondary produced particles through the system, and compute the nuclide production from spallation and low-energy neutron interactions. Beam energies of 200 and 400 MeV were used, and several targets were considered for each nuclide.
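
    A back-of-the-envelope companion to the production calculations described above: once a production rate R is known from the transport codes, the activity of a nuclide with decay constant λ builds up as A(t) = R(1 - exp(-λt)) during irradiation and saturates at R. The production rate and irradiation times in the sketch are illustrative assumptions, not results from the LAHET/MCNP/CINDER90 study.

```python
# Activity buildup under a constant production rate R (nuclei/s):
# A(t) = R * (1 - exp(-lambda * t)), saturating at R.  The production rate and
# irradiation times below are illustrative assumptions, not values from the study.
import numpy as np

T_HALF_MO99_H = 65.94                      # Mo-99 half-life in hours
lam = np.log(2) / (T_HALF_MO99_H * 3600)   # decay constant (1/s)
R = 1.0e12                                 # assumed production rate (nuclei/s)

for days in (1, 3, 7, 14):
    t = days * 86400.0
    activity = R * (1.0 - np.exp(-lam * t))   # activity in Bq at end of irradiation
    print(f"{days:2d} d irradiation: {activity / 3.7e10:.2f} Ci ({activity:.2e} Bq)")
```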

  7. Accelerator simulation of astrophysical processes

    NASA Technical Reports Server (NTRS)

    Tombrello, T. A.

    1983-01-01

    Phenomena that involve accelerated ions in stellar processes that can be simulated with laboratory accelerators are described. Stellar evolutionary phases, such as the CNO cycle, have been partially explored with accelerators, up to the consumption of He by alpha particle radiative capture reactions. Further experimentation is indicated on reactions featuring N-13(p,gamma)O-14, O-15(alpha, gamma)Ne-19, and O-14(alpha,p)F-17. Accelerated beams interacting with thin foils produce reaction products that permit a determination of possible elemental abundances in stellar objects. Additionally, isotopic ratios observed in chondrites can be duplicated with accelerator beam interactions and thus constraints can be set on the conditions producing the meteorites. Data from isotopic fractionation from sputtering, i.e., blasting surface atoms from a material using a low energy ion beam, leads to possible models for processes occurring in supernova explosions. Finally, molecules can be synthesized with accelerators and compared with spectroscopic observations of stellar winds.

  8. Hardware-Accelerated Simulated Radiography

    SciTech Connect

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-08-04

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester.

  9. Development of a neural net paradigm that predicts simulator sickness

    SciTech Connect

    Allgood, G.O.

    1993-03-01

    A disease exists that affects pilots and aircrew members who use Navy Operational Flight Training Systems. This malady, commonly referred to as simulator sickness and whose symptomatology closely aligns with that of motion sickness, can compromise the use of these systems because of a reduced utilization factor, negative transfer of training, and reduction in combat readiness. A report is submitted that develops an artificial neural network (ANN) and behavioral model that predicts the onset and level of simulator sickness in the pilots and aircrews who use these systems. It is proposed that the paradigm could be implemented in real time as a biofeedback monitor to reduce the risk to users of these systems. The model captures the neurophysiological impact of use (human-machine interaction) by developing a structure that maps the associative and nonassociative behavioral patterns (learned expectations) and vestibular (otolith and semicircular canals of the inner ear) and tactile interaction, derived from system acceleration profiles, onto an abstract space that predicts simulator sickness for a given training flight.

  10. 20th Space Simulation Conference: The Changing Testing Paradigm

    NASA Technical Reports Server (NTRS)

    Stecher, Joseph L., III (Compiler)

    1999-01-01

    The Institute of Environmental Sciences and Technology's Twentieth Space Simulation Conference, "The Changing Testing Paradigm" provided participants with a forum to acquire and exchange information on the state-of-the-art in space simulation, test technology, atomic oxygen, program/system testing, dynamics testing, contamination, and materials. The papers presented at this conference and the resulting discussions carried out the conference theme "The Changing Testing Paradigm."

  11. 20th Space Simulation Conference: The Changing Testing Paradigm

    NASA Technical Reports Server (NTRS)

    Stecher, Joseph L., III (Compiler)

    1998-01-01

    The Institute of Environmental Sciences' Twentieth Space Simulation Conference, "The Changing Testing Paradigm" provided participants with a forum to acquire and exchange information on the state-of-the-art in space simulation, test technology, atomic oxygen, program/system testing, dynamics testing, contamination, and materials. The papers presented at this conference and the resulting discussions carried out the conference theme "The Changing Testing Paradigm."

  12. Accelerated dynamics simulations of nanotubes.

    SciTech Connect

    Uberuaga, B. P.; Stuart, S. J.; Voter, A. F.

    2002-01-01

    We report on the application of accelerated dynamics techniques to the study of carbon nanotubes. We have used the parallel replica method, and temperature-accelerated dynamics simulations are currently in progress. In the parallel replica study, we have stretched tubes at a rate significantly lower than that used in previous studies. In these preliminary results, we find that there are qualitative differences in the rupture of the nanotubes at different temperatures. We plan on extending this investigation to include nanotubes of various chiralities. We also plan on exploring unique geometries of nanotubes.

  13. AESS: Accelerated Exact Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.
    Program summary:
    Program title: AESS
    Catalogue identifier: AEJW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: University of Tennessee copyright agreement
    No. of lines in distributed program, including test data, etc.: 10 861
    No. of bytes in distributed program, including test data, etc.: 394 631
    Distribution format: tar.gz
    Programming language: C for processors, CUDA for NVIDIA GPUs
    Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
    Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
    Classification: 3, 16.12
    Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
    Solution
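
    For readers unfamiliar with the underlying algorithm, the sketch below is a plain-Python rendering of Gillespie's direct-method SSA that AESS provides accelerated implementations of. The two-reaction birth/death system is an assumed toy example, not a test case or interface from the AESS package.

```python
# Plain-Python sketch of Gillespie's direct-method SSA, the algorithm that AESS
# accelerates.  The two-reaction birth/death system below is an illustrative
# assumption, not a test case shipped with AESS.
import random

def gillespie_direct(x, t_end, rate_birth=10.0, rate_death=0.1, seed=1):
    """Simulate X: 0 -> X at rate_birth, X -> 0 at rate_death * x."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, x)]
    while t < t_end:
        a1, a2 = rate_birth, rate_death * x       # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)                  # exponential waiting time
        x += 1 if rng.random() * a0 < a1 else -1  # pick a reaction proportionally to a_i
        traj.append((t, x))
    return traj

trajectory = gillespie_direct(x=0, t_end=50.0)
print("final state:", trajectory[-1])
```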

  14. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert

    2008-07-01

    The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  15. Community petascale project for accelerator science and simulation : Advancing computational science for future accelerators and accelerator technologies.

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L. C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.

    2008-01-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R & D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  16. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  17. Effects of Frequency and Motion Paradigm on Perception of Tilt and Translation During Periodic Linear Acceleration

    NASA Technical Reports Server (NTRS)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, Scott J.

    2009-01-01

    Previous studies have demonstrated an effect of frequency on the gain of tilt and translation perception. Results from different motion paradigms are often combined to extend the stimulus frequency range. For example, Off-Vertical Axis Rotation (OVAR) and Variable Radius Centrifugation (VRC) are useful to test low frequencies of linear acceleration at amplitudes that would require impractical sled lengths. The purpose of this study was to compare roll-tilt and lateral translation motion perception in 12 healthy subjects across four paradigms: OVAR, VRC, sled translation and rotation about an earth-horizontal axis. Subjects were oscillated in darkness at six frequencies from 0.01875 to 0.6 Hz (peak acceleration equivalent to 10 deg, less for sled motion below 0.15 Hz). Subjects verbally described the amplitude of perceived tilt and translation, and used a joystick to indicate the direction of motion. Consistent with previous reports, tilt perception gain decreased as a function of stimulus frequency in the motion paradigms without concordant canal tilt cues (OVAR, VRC and Sled). Translation perception gain was negligible at low stimulus frequencies and increased at higher frequencies. There were no significant differences between the phase of tilt and translation, nor did the phase significantly vary across stimulus frequency. There were differences in perception gain across the different paradigms. Paradigms that included actual tilt stimuli had the larger tilt gains, and paradigms that included actual translation stimuli had larger translation gains. In addition, the frequency at which there was a crossover of tilt and translation gains appeared to vary across motion paradigm between 0.15 and 0.3 Hz. Since the linear acceleration in the head lateral plane was equivalent across paradigms, differences in gain may be attributable to the presence of linear accelerations in orthogonal directions and/or cognitive aspects based on the expected motion paths.
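
    The gains and phases reported above are, in essence, obtained by comparing a response record with the sinusoidal stimulus at the test frequency. The sketch below shows one minimal way to do that bookkeeping by least-squares fitting sine and cosine components; the synthetic response and all amplitudes are assumptions, not data from this study.

```python
# Minimal sketch of the gain/phase bookkeeping used in studies like the one above:
# fit sine and cosine terms at the stimulus frequency to a response record and report
# gain (response amplitude / stimulus amplitude) and phase.  The synthetic response
# below is an illustrative assumption, not data from this experiment.
import numpy as np

f = 0.15                                   # stimulus frequency (Hz)
t = np.arange(0.0, 60.0, 0.05)             # 60 s record sampled at 20 Hz
stim_amp = 10.0                            # e.g. equivalent tilt amplitude (deg)
response = 6.0 * np.sin(2 * np.pi * f * t - 0.4) + np.random.normal(0, 0.5, t.size)

# Least-squares fit: response ~ a*sin(wt) + b*cos(wt)
design = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
(a, b), *_ = np.linalg.lstsq(design, response, rcond=None)

gain = np.hypot(a, b) / stim_amp
phase_deg = np.degrees(np.arctan2(b, a))   # phase relative to the stimulus
print(f"gain = {gain:.2f}, phase = {phase_deg:.1f} deg")
```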

  18. VORTEX PARADIGM FOR ACCELERATED INHOMOGENEOUS FLOWS: Visiometrics for the Rayleigh-Taylor and Richtmyer-Meshkov Environments

    NASA Astrophysics Data System (ADS)

    Zabusky, Norman J.

    1999-01-01

    We illustrate how cogent visiometrics can provide peak insights that lead to pathways for discovery through computer simulation. This process includes visualizing, quantifying, and tracking evolving coherent structure morphologies. We use the vortex paradigm (Hawley & Zabusky 1989) to guide, interpret, and model phenomena arising in numerical simulations of accelerated inhomogeneous flows, e.g. Richtmyer-Meshkov shock-interface and shock-bubble environments and Rayleigh-Taylor environments. Much of this work is available on the Internet at the sites of my collaborators, A Kotelnikov, J Ray, and R Samtaney, at our Vizlab URL, http://vizlab.rutgers.edu/vizlab.html.

  19. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  20. EMPaSE: an Extensible Multi-Paradigm Simulation Environment

    Energy Science and Technology Software Center (ESTSC)

    2010-08-05

    EMPaSE is a hierarchical, extensible, modular modeling environment for developing and running hybrid simulations of sequential-modular, systems dynamics, discrete-event, and agent-based paradigms. It contains two principal components: a multi-paradigm simulation engine and a graphical user interface. EMPaSE models are defined through a hierarchically-defined set of computational modules that define the simulation logic. Inter-module communication occurs through two complementary systems: pull-based "ports" for general computation patterns and push-based "plugs" for event processing. Entities (i.e. agents) within the simulation operate within an abstract multi-network environment. The EMPaSE simulation engine is designed around a flexible plug-in architecture, allowing simulations to import computational modules, engine customizations, and interfaces to external applications from independent plug-in libraries. The EMPaSE GUI environment provides an environment for graphically constructing, executing, and debugging EMPaSE models. As with the simulation engine, the GUI is constructed on top of an extensible architecture that supports rapid customization of the user experience through external plug-in libraries.

  1. EMPaSE: an Extensible Multi-Paradigm Simulation Environment

    SciTech Connect

    Siirola, John; Spotz, William; Warrender, Christina

    2010-08-05

    EMPaSE is a hierarchical, extensible, modular modeling environment for developing and running hybrid simulations of sequential-modular, systems dynamics, discrete-event, and agent-based paradigms. It contains two principal components: a multi-paradigm simulation engine and a graphical user interface. EMPaSE models are defined through a hierarchically-defined set of computational modules that define the simulation logic. Inter-module communication occurs through two complementary systems: pull-based "ports" for general computation patterns and push-based "plugs" for event processing. Entities (i.e. agents) within the simulation operate within an abstract multi-network environment. The EMPaSE simulation engine is designed around a flexible plug-in architecture, allowing simulations to import computational modules, engine customizations, and interfaces to external applications from independent plug-in libraries. The EMPaSE GUI environment provides an environment for graphically constructing, executing, and debugging EMPaSE models. As with the simulation engine, the GUI is constructed on top of an extensible architecture that supports rapid customization of the user experience through external plug-in libraries.
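
    The pull-based "ports" and push-based "plugs" described above are a common pair of communication styles. The toy sketch below illustrates the pattern only; the class and method names are assumptions and are not the EMPaSE API.

```python
# Toy sketch of the two communication styles described above -- pull-based "ports"
# for general computation and push-based "plugs" for event processing.  The class
# and method names are illustrative; they are not the EMPaSE API.
class Port:
    """Pull-based: the consumer asks the provider for a value when it needs one."""
    def __init__(self, provider):
        self._provider = provider          # a zero-argument callable
    def pull(self):
        return self._provider()

class Plug:
    """Push-based: the producer pushes events to whoever is plugged in."""
    def __init__(self):
        self._handlers = []
    def connect(self, handler):
        self._handlers.append(handler)
    def push(self, event):
        for handler in self._handlers:
            handler(event)

# Example wiring between two hypothetical modules.
population = {"count": 100}
pop_port = Port(lambda: population["count"])          # module B pulls the population
arrival_plug = Plug()
arrival_plug.connect(lambda e: population.update(count=population["count"] + e))

arrival_plug.push(5)                                  # an arrival event is pushed
print(pop_port.pull())                                # -> 105
```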

  2. Transient simulation of ram accelerator flowfields

    NASA Astrophysics Data System (ADS)

    Drabczuk, Randall P.; Rolader, G.; Dash, S.; Sinha, N.; York, B.

    1993-01-01

    This paper describes the development of an advanced computational fluid dynamic (CFD) simulation capability in support of the USAF Armament Directorate ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution adaptive grid clustering, and high pressure thermo-chemistry. Selected ram accelerator simulations are presented that serve to exhibit the CRAFT code capabilities and identify some of the principal research/design issues.

  3. Transient simulation of ram accelerator flowfields

    NASA Astrophysics Data System (ADS)

    Sinha, N.; York, B. J.; Dash, S. M.; Drabczuk, R.; Rolader, G. E.

    1992-10-01

    This paper describes the development of an advanced computational fluid dynamic (CFD) simulation capability in support of the U.S. Air Force Armament Directorate's ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution adaptive grid clustering, high pressure thermochemistry, etc. Selected ram accelerator simulations are presented which serve to exhibit the CRAFT code's capabilities and identify some of the principal research/design issues.

  4. Enabling Technologies for Petascale Electromagnetic Accelerator Simulation

    SciTech Connect

    Lee, Lie-Quan; Akcelik, Volkan; Chen, Sheng; Ge, Li-Xin; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Ng, Cho; Ko, Kwok; Luo, Xiaojun; Shephard, Mark

    2007-11-09

    The SciDAC2 accelerator project at SLAC aims to simulate an entire three-cryomodule radio frequency (RF) unit of the International Linear Collider (ILC) main linac. Petascale computing resources, supported by advances in Applied Mathematics (AM) and Computer Science (CS) and the INCITE Program, are essential to enable such very large-scale electromagnetic accelerator simulations required by the ILC Global Design Effort. This poster presents the recent advances and achievements in the areas of CS/AM through collaborations.

  5. Computation applied to particle accelerator simulations

    SciTech Connect

    Herrmannsfeldt, W.B. ); Yan, Y.T. )

    1991-07-01

    The rapid growth in the power of large-scale computers has had a revolutionary effect on the study of charged-particle accelerators that is similar to the impact of smaller computers on everyday life. Before an accelerator is built, it is now the absolute rule to simulate every component and subsystem by computer to establish modes of operation and tolerances. We will bypass the important and fruitful areas of control and operation and consider only application to design and diagnostic interpretation. Applications of computers can be divided into separate categories including: component design, system design, stability studies, cost optimization, and operating condition simulation. For the purposes of this report, we will choose a few examples taken from the above categories to illustrate the methods and we will discuss the significance of the work to the project, and also briefly discuss the accelerator project itself. The examples that will be discussed are: (1) the tracking analysis done for the main ring of the Superconducting Super Collider, which contributed to the analysis which ultimately resulted in changing the dipole coil diameter to 5 cm from the earlier design for a 4-cm coil-diameter dipole magnet; (2) the design of accelerator structures for electron-positron linear colliders and circular colliding beam systems (B-factories); (3) simulation of the wake fields from multibunch electron beams for linear colliders; and (4) particle-in-cell simulation of space-charge dominated beams for an experimental linear induction accelerator for Heavy Ion Fusion. 8 refs., 9 figs.

  6. NUMERICAL SIMULATIONS OF SPICULE ACCELERATION

    SciTech Connect

    Guerreiro, N.; Carlsson, M.; Hansteen, V. E-mail: mats.carlsson@astro.uio.no

    2013-04-01

    Observations in the Hα line of hydrogen and the H and K lines of singly ionized calcium on the solar limb reveal the existence of structures with jet-like behavior, usually designated as spicules. The driving mechanism for such structures remains poorly understood. Sterling et al. shed some light on the problem mimicking reconnection events in the chromosphere with a one-dimensional code by injecting energy with different spatial and temporal distributions and tracing the thermodynamic evolution of the upper chromospheric plasma. They found three different classes of jets resulting from these injections. We follow their approach but improve the physical description by including non-LTE cooling in strong spectral lines and non-equilibrium hydrogen ionization. Increased cooling and conversion of injected energy into hydrogen ionization energy instead of thermal energy both lead to weaker jets and smaller final extent of the spicules compared with Sterling et al. In our simulations we find different behavior depending on the timescale for hydrogen ionization/recombination. Radiation-driven ionization fronts also form.

  7. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) identical MPI implementation is required in both systems, and; (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  8. Kinetic Simulations of Particle Acceleration at Shocks

    SciTech Connect

    Caprioli, Damiano; Guo, Fan

    2015-07-16

    Collisionless shocks are mediated by collective electromagnetic interactions and are sources of non-thermal particles and emission. The full particle-in-cell approach and a hybrid approach are sketched, simulations of collisionless shocks are shown using a multicolor presentation. Results for SN 1006, a case involving ion acceleration and B field amplification where the shock is parallel, are shown. Electron acceleration takes place in planetary bow shocks and galaxy clusters. It is concluded that acceleration at shocks can be efficient: >15%; CRs amplify B field via streaming instability; ion DSA is efficient at parallel, strong shocks; ions are injected via reflection and shock drift acceleration; and electron DSA is efficient at oblique shocks.

  9. Accelerated Aging of the M119 Simulator

    NASA Technical Reports Server (NTRS)

    Bixon, Eric R.

    2000-01-01

    This paper addresses the storage requirements, shelf life, and reliability of the M119 Whistling Simulator. Experimental conditions have been determined and the data analysis has been completed for the accelerated testing of the system. A general methodology to evaluate the shelf life of the system as a function of the storage time, temperature, and relative humidity is discussed.
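
    Accelerated-aging analyses of this kind commonly relate test temperature to service temperature through an Arrhenius acceleration factor. The sketch below shows that calculation with an assumed activation energy and assumed temperatures; it is not the M119 model, which also includes a relative-humidity dependence.

```python
# Illustrative Arrhenius acceleration-factor calculation of the kind used to relate
# elevated-temperature test time to storage time.  The activation energy and the
# temperatures below are assumptions for the sketch, not values from the M119 study
# (which also includes a humidity dependence not modeled here).
import math

K_BOLTZMANN_EV = 8.617e-5   # eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between stress and use temperatures."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(ea_ev=0.7, t_use_c=25.0, t_stress_c=71.0)
print(f"acceleration factor: {af:.1f}")
print(f"30 days at 71 C ~ {30 * af / 365:.1f} years at 25 C")
```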

  10. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In, and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  11. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In, and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  12. DIFFUSIVE SHOCK ACCELERATION SIMULATIONS OF RADIO RELICS

    SciTech Connect

    Kang, Hyesung; Ryu, Dongsu; Jones, T. W. E-mail: ryu@canopus.cnu.ac.kr

    2012-09-01

    Recent radio observations have identified a class of structures, so-called radio relics, in clusters of galaxies. The radio emission from these sources is interpreted as synchrotron radiation from GeV electrons gyrating in µG-level magnetic fields. Radio relics, located mostly in the outskirts of clusters, seem to associate with shock waves, especially those developed during mergers. In fact, they seem to be good structures to identify and probe such shocks in intracluster media (ICMs), provided we understand the electron acceleration and re-acceleration at those shocks. In this paper, we describe time-dependent simulations for diffusive shock acceleration at weak shocks that are expected to be found in ICMs. Freshly injected as well as pre-existing populations of cosmic-ray (CR) electrons are considered, and energy losses via synchrotron and inverse Compton are included. We then compare the synchrotron flux and spectral distributions estimated from the simulations with those in two well-observed radio relics in CIZA J2242.8+5301 and ZwCl0008.8+5215. Considering that CR electron injection is expected to be rather inefficient at weak shocks with Mach number M ≲ a few, the existence of radio relics could indicate the pre-existing population of low-energy CR electrons in ICMs. The implication of our results on the merger shock scenario of radio relics is discussed.
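
    For orientation, the test-particle relations of diffusive shock acceleration tie the electron spectral slope and the synchrotron spectral index directly to the shock Mach number. The sketch below evaluates those standard relations for a few weak-shock Mach numbers; it does not reproduce the time-dependent, loss-limited simulations described in the abstract.

```python
# Quick test-particle DSA bookkeeping for the weak (M ~ a few) shocks discussed above:
# compression ratio r, the momentum power-law index q of the accelerated electrons,
# and the resulting synchrotron spectral index alpha.  These are the standard
# test-particle relations only; they do not reproduce the time-dependent simulations
# described in the abstract.
GAMMA = 5.0 / 3.0   # adiabatic index of the thermal gas

def dsa_test_particle(mach):
    r = (GAMMA + 1) * mach**2 / ((GAMMA - 1) * mach**2 + 2)   # compression ratio
    q = 3 * r / (r - 1)            # f(p) ~ p**-q
    s = q - 2                      # N(E) ~ E**-s for relativistic electrons
    alpha = (s - 1) / 2            # synchrotron spectral index
    return r, q, s, alpha

for mach in (2.0, 3.0, 4.5, 10.0):
    r, q, s, alpha = dsa_test_particle(mach)
    print(f"M = {mach:4.1f}: r = {r:.2f}, q = {q:.2f}, N(E) index = {s:.2f}, "
          f"alpha = {alpha:.2f}")
```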

  13. Numerical and laboratory simulations of auroral acceleration

    SciTech Connect

    Gunell, H.; De Keyser, J.; Mann, I.

    2013-10-15

    The existence of parallel electric fields is an essential ingredient of auroral physics, leading to the acceleration of particles that give rise to the auroral displays. An auroral flux tube is modelled using electrostatic Vlasov simulations, and the results are compared to simulations of a proposed laboratory device that is meant for studies of the plasma physical processes that occur on auroral field lines. The hot magnetospheric plasma is represented by a gas discharge plasma source in the laboratory device, and the cold plasma mimicking the ionospheric plasma is generated by a Q-machine source. In both systems, double layers form with plasma density gradients concentrated on their high potential sides. The systems differ regarding the properties of ion acoustic waves that are heavily damped in the magnetosphere, where the ion population is hot, but weakly damped in the laboratory, where the discharge ions are cold. Ion waves are excited by the ion beam that is created by acceleration in the double layer in both systems. The efficiency of this beam-plasma interaction depends on the acceleration voltage. For voltages where the interaction is less efficient, the laboratory experiment is more space-like.

  14. An exact accelerated stochastic simulation algorithm

    PubMed Central

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2∕3 power of the number of reaction events in a Galton–Watson process. PMID:19368432

  15. The numerical simulation of accelerator components

    SciTech Connect

    Herrmannsfeldt, W.B.; Hanerfeld, H.

    1987-05-01

    The techniques of the numerical simulation of plasmas can be readily applied to problems in accelerator physics. Because the problems usually involve a single-component "plasma," and times that are, at most, a few plasma oscillation periods, it is frequently possible to make very good simulations with relatively modest computation resources. We will discuss the methods and illustrate them with several examples. One of the more powerful techniques of understanding the motion of charged particles is to view computer-generated motion pictures. We will show several little movie strips to illustrate the discussions. The examples will be drawn from the application areas of Heavy Ion Fusion, electron-positron linear colliders and injectors for free-electron lasers. 13 refs., 10 figs., 2 tabs.

  16. SIMULATING ACCELERATOR STRUCTURE OPERATION AT HIGH POWER

    SciTech Connect

    Ivanov, V

    2004-09-15

    The important limiting factors in high-gradient accelerator structure operation are dark current capture, RF breakdown and electron multipacting. These processes involve both primary and secondary electron field emission and produce plasma and X-rays. To better understand these phenomena, they have simulated dark current generation and transport in a linac structure and a square-bend waveguide, both high power tested at SLAC. For these simulations, they use the parallel, time-domain, unstructured-grid code Tau3P and the particle tracking module Track3P. In this paper, they present numerical results and their comparison with measurements on energy spectrum of electrons transmitted in a 30-cell structure and of X-rays emitted from the square-bend waveguide.

  17. Toward GPGPU accelerated human electromechanical cardiac simulations

    PubMed Central

    Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David

    2014-01-01

    In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart—a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedup of up to 72× compared with SC and 2.6× compared with MC was also observed for the PDE solve. Using the same human geometry, the GPU implementation of mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492

  18. A hierarchical exact accelerated stochastic simulation algorithm

    PubMed Central

    Orendorff, David; Mjolsness, Eric

    2012-01-01

    A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214

  19. Translational Vestibulo-Ocular Reflex and Motion Perception During Interaural Linear Acceleration: Comparison of Different Motion Paradigms

    NASA Technical Reports Server (NTRS)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.

    2011-01-01

    The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s/s (equivalent to 10 deg tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation involving actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt which did not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for

  20. Geospace simulations using modern accelerator processor technology

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.

    2009-12-01

    OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the Sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
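
    The column-and-slice traversal described above can be sketched schematically: the local 3D block is split into (x, y) columns, and each column is processed one z-slice at a time, as a memory-limited SPE would work out of its local store. The array sizes and the simple smoothing update standing in for the MHD solver are illustrative assumptions, not OpenGGCM code.

```python
# Schematic sketch of the column/slice traversal described above: the local 3D block
# is split into (x, y) columns, and each column is processed one z-slice at a time,
# mimicking an SPE working out of a small local store.  The array sizes and the
# simple smoothing update standing in for the MHD solver are illustrative assumptions.
import numpy as np

nx, ny, nz = 32, 32, 128
col_x, col_y = 8, 8                         # footprint of one column
field = np.random.rand(nx, ny, nz)
result = np.empty_like(field)

for ix in range(0, nx, col_x):              # loop over columns ...
    for iy in range(0, ny, col_y):
        column = field[ix:ix + col_x, iy:iy + col_y, :]
        out = result[ix:ix + col_x, iy:iy + col_y, :]
        for k in range(nz):                 # ... and process them slice by slice
            lo, hi = max(k - 1, 0), min(k + 1, nz - 1)
            out[:, :, k] = (column[:, :, lo] + column[:, :, k] + column[:, :, hi]) / 3.0

print(result.shape)
```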

  1. Lunar Dust Simulant in Mechanical Component Testing - Paradigm and Practicality

    NASA Technical Reports Server (NTRS)

    Jett, T.; Street, K.; Abel, P.; Richmond, R.

    2008-01-01

    Due to the uniquely harsh lunar surface environment, terrestrial test activities may not adequately represent abrasive wear by lunar dust likely to be experienced in mechanical systems used in lunar exploration. Testing to identify potential moving mechanism problems has recently begun within the NASA Engineering and Safety Center Mechanical Systems Lunar Dust Assessment activity in coordination with the Exploration Technology and Development Program Dust Management Project, and these complementary efforts will be described. Specific concerns about differences between simulant and lunar dust, and procedures for mechanical component testing with lunar simulant will be considered. In preparing for long term operations within a dusty lunar environment, the three fundamental approaches to keeping mechanical equipment functioning are dust avoidance, dust removal, and dust tolerance, with some combination of the three likely to be found in most engineering designs. Methods to exclude dust from contact with mechanical components would constitute mitigation by dust avoidance, so testing seals for dust exclusion efficacy as a function of particle size provides useful information for mechanism design. Dust of particle size less than a micron is not well documented for impact on lunar mechanical components. Therefore, creating a standardized lunar dust simulant in the particulate size range of ca. 0.1 to 1.0 micrometer is useful for testing effects on mechanical components such as bearings, gears, seals, bushings, and other moving mechanical assemblies. Approaching actual wear testing of mechanical components, it is beneficial to first establish relative wear rates caused by dust on commonly used mechanical component materials. The wear mode due to dust within mechanical components, such as abrasion caused by dust in grease(s), needs to be considered, as well as the effects of vacuum, lunar thermal cycle, and electrostatics on wear rate.

  2. Particle Simulations of a Linear Dielectric Wall Proton Accelerator

    SciTech Connect

    Poole, B R; Blackfield, D T; Nelson, S D

    2007-06-12

    The dielectric wall accelerator (DWA) is a compact induction accelerator structure that incorporates the accelerating mechanism, pulse forming structure, and switch structure into an integrated module. The DWA consists of stacked stripline Blumlein assemblies, which can provide accelerating gradients in excess of 100 MeV/meter. Blumleins are switched sequentially according to a prescribed acceleration schedule to maintain synchronism with the proton bunch as it accelerates. A finite-difference time-domain (FDTD) code is used to determine the applied acceleration field to the proton bunch. Particle simulations are used to model the injector as well as the accelerator stack to determine the proton bunch energy distribution, both longitudinal and transverse dynamic focusing, and emittance growth associated with various DWA configurations.
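
    The "prescribed acceleration schedule" mentioned above amounts to knowing when the bunch reaches each Blumlein. The small kinematics sketch below steps a proton along the stack under an assumed average gradient, stage pitch, and injection energy; the numbers are illustrative and are not taken from the DWA design.

```python
# Small kinematics sketch of the switching-schedule idea described above: given an
# assumed accelerating gradient, step a proton along the stack and record the time
# at which it reaches each Blumlein so that stage can be triggered in synchronism.
# Gradient, stage pitch, and injection energy are illustrative assumptions.
import math

M_P_MEV = 938.272        # proton rest energy (MeV)
C = 2.998e8              # speed of light (m/s)

gradient = 100.0         # assumed average gradient (MeV/m)
stage_pitch = 0.01       # axial length of one Blumlein stage (m)
kinetic = 5.0            # assumed injection kinetic energy (MeV)

t = 0.0
print(" z (m)   T (MeV)   beta    t (ns)")
for stage in range(1, 101):
    gamma = 1.0 + kinetic / M_P_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    t += stage_pitch / (beta * C)          # transit time across this stage
    kinetic += gradient * stage_pitch      # energy gained in this stage
    if stage % 25 == 0:
        print(f"{stage * stage_pitch:6.2f} {kinetic:9.2f} {beta:6.3f} {t * 1e9:8.2f}")
```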

  3. Object-Oriented Parallel Particle-in-Cell Code for Beam Dynamics Simulation in Linear Accelerators

    SciTech Connect

    Qiang, J.; Ryne, R.D.; Habib, S.; Decky, V.

    1999-11-13

    In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message passing programming paradigm along with dynamic load balancing. Implementing object-oriented software design provides the code with better maintainability, reusability, and extensibility compared with conventional structure based code. This also helps to encapsulate the details of communications syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Some important features of this code also include employing symplectic integration with linear maps of external focusing elements and using z as the independent variable, typical in accelerators. A successful application was done to simulate beam transport through three superconducting sections in the APT linac design.
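
    The note about symplectic integration with linear maps and z as the independent variable can be illustrated with standard transfer matrices: a drift and a thin-lens quadrupole acting on transverse phase space (x, x'). The lattice and particle data below are assumptions for the sketch, not the parallel PIC code's implementation.

```python
# Minimal sketch of the "linear maps with z as the independent variable" idea noted
# above: transverse phase space (x, x') is advanced through drifts and thin-lens
# quadrupoles using standard transfer matrices.  Matrix values and particle data are
# illustrative; this is not the parallel PIC code's implementation.
import numpy as np

def drift(length):
    return np.array([[1.0, length],
                     [0.0, 1.0]])

def thin_quad(focal_length):
    return np.array([[1.0, 0.0],
                     [-1.0 / focal_length, 1.0]])

# A toy FODO-like cell in z; each matrix has unit determinant (symplectic in 1D).
lattice = [drift(0.5), thin_quad(+2.0), drift(0.5), thin_quad(-2.0)]
cell_map = np.linalg.multi_dot(lattice[::-1])    # later elements act last

particles = np.random.normal(0.0, 1e-3, size=(2, 1000))   # rows: x (m), x' (rad)
particles = cell_map @ particles
print("det(map) =", np.linalg.det(cell_map), " rms x after cell:", particles[0].std())
```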

  4. MHD Simulations of Thermal Plasma Jets in Coaxial Plasma Accelerators

    NASA Astrophysics Data System (ADS)

    Subramaniam, Vivek; Raja, Laxminarayan

    2015-09-01

    The development of a magneto-hydrodynamics (MHD) numerical tool to study high energy density thermal plasma in coaxial plasma accelerators is presented. The coaxial plasma accelerator is a device used to simulate the conditions created at the confining wall of a thermonuclear fusion reactor during an edge localized mode (ELM) disruption event. This is achieved by creating magnetized thermal plasma in a coaxial volume which is then accelerated by the Lorentz force to form a high velocity plasma jet. The simulation tool developed solves the resistive MHD equations using a finite volume method (FVM) framework. The acceleration and subsequent demagnetization of the plasma as it travels down the length of the accelerator is simulated and shows good agreement with experiments. Additionally, a model to study the thermalization of the plasma at the inlet is being developed in order to give self-consistent initial conditions to the MHD solver.

  5. A linear accelerator for simulated micrometeors.

    NASA Technical Reports Server (NTRS)

    Slattery, J. C.; Becker, D. G.; Hamermesh, B.; Roy, N. L.

    1973-01-01

    Review of the theory, design parameters, and construction details of a linear accelerator designed to impart meteoric velocities to charged microparticles in the 1- to 10-micron diameter range. The described linac is of the Sloan-Lawrence type and, in a significant departure from conventional accelerator practice, is adapted to single particle operation by employing a square wave driving voltage with the frequency automatically adjusted from 12.5 to 125 kHz according to the variable velocity of each injected particle. Any output velocity up to about 30 km/sec can easily be selected, with a repetition rate of approximately two particles per minute.

  6. Simulations of a meter-long plasma wakefield accelerator

    SciTech Connect

    Lee, S.; Katsouleas, T.; Hemker, R.; Mori, W. B.

    2000-06-01

    Full-scale particle-in-cell simulations of a meter-long plasma wakefield accelerator (PWFA) are presented in two dimensions. The results support the design of a current PWFA experiment in the nonlinear blowout regime where analytic solutions are intractable. A relativistic electron bunch excites a plasma wake that accelerates trailing particles at rates of several hundred MeV/m. A comparison is made of various simulation codes, and a parallel object-oriented code OSIRIS is used to model a full meter of acceleration. Excellent agreement is obtained between the simulations and analytic expressions for the transverse betatron oscillations of the beam. The simulations are used to develop scaling laws for designing future multi-GeV accelerator experiments.
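
    For context, the transverse betatron oscillations mentioned above are commonly described, in the blowout (ion-column) regime, by the standard textbook expression below; it is quoted here as background and is not a result taken from the paper itself.

\[
k_\beta = \frac{k_p}{\sqrt{2\gamma}}, \qquad \lambda_\beta = \sqrt{2\gamma}\,\lambda_p ,
\]

    where \(k_p = \omega_p / c\) is the plasma wavenumber and \(\gamma\) is the Lorentz factor of the beam electrons.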

  7. Simulation of the LAMPF linear accelerator

    SciTech Connect

    Garnett, R.W.; Gray, E.R.; Wangler, T.P.

    1994-12-31

    The results of an extensive study to simulate the performance of the LAMPF linac are discussed. Transport of both the H+ and the H- beams, starting in the low-energy beam transport sections at 750 keV and continuing through the drift-tube linac and the side-coupled linac up to 800 MeV, has been simulated. The simulation results are compared with the measured beam emittance and profile data.

  8. Numerical simulations of reactive flows in ram accelerators

    NASA Astrophysics Data System (ADS)

    Li, C.; Landsberg, A. M.; Kailasanath, K.; Oran, E. S.; Boris, J. P.

    1992-10-01

    Reactive flows around accelerating projectiles in ram accelerators are numerically simulated using a newly developed code for time-dependent flows in noninertial frames. Two different modes of operation, the thermally choked mode and the superdetonative mode, have been investigated. The simulations show that, in both modes, a significant acceleration (up to 10^5 g) can be achieved with projectiles of different shapes in various hydrogen-oxygen-nitrogen mixtures. However, the flow field is highly transient and the thrust on the projectile is unsteady. In the thermally choked mode, the unsteadiness is caused by the rapid acceleration of the projectile and by large-scale, vortical flow structures generated in or near the recirculation region behind the projectile. In the superdetonative mode, the unsteadiness is mainly caused by the accelerating projectile.

  9. Accelerated growth of calcium silicate hydrates: Experiments and simulations

    SciTech Connect

    Nicoleau, Luc

    2011-12-15

    Despite the usefulness of isothermal calorimetry in cement analytics, without further computation it provides only limited information on the nucleation and growth of hydrates. A model originally developed by Garrault et al. is used in this study to simulate calorimetric hydration curves of cement with different known hardening accelerators. The limited set of parameters used in this model, each having a physical or chemical significance, is valuable for a better understanding of the mechanisms underlying the acceleration of C-S-H precipitation. Alite hydration in the presence of four different types of hardening accelerators was investigated. The results show that each accelerator type acts on one or several growth parameters and that the model may support the development of new accelerators. These simulations, supported by experimental observations, allow us to follow the formation of the C-S-H layer around grains and to extract information on its apparent permeability.

  10. Accelerating the paradigm shift toward inclusion of pregnant women in drug research: Ethical and regulatory considerations.

    PubMed

    White, Amina

    2015-11-01

    Although there has been long-standing reluctance to include pregnant women as clinical trial participants, increasing recognition of profound gaps in research on the safety and efficacy of drugs often prescribed to pregnant women calls into question the practice of routinely excluding them. This article presents compelling reasons for including pregnant women in clinical research, highlights certain regulatory barriers to their inclusion, and proposes that professional societies with expertise in obstetrics and maternal-fetal medicine can be instrumental in hastening the paradigm shift from the systematic exclusion of pregnant women in research to one of responsible and fair inclusion. PMID:26385413

  11. Using FPGA Devices to Accelerate Biomolecular Simulations

    SciTech Connect

    Alam, Sadaf R; Agarwal, Pratul K; Smith, Melissa C; Vetter, Jeffrey S; Caliga, David E

    2007-03-01

    A field-programmable gate array (FPGA) implementation of the particle-mesh Ewald molecular dynamics simulation method reduces the microprocessor time-to-solution by a factor of three while using only high-level languages. The application speedup on FPGA devices increases with the problem size. The authors use a performance model to analyze the potential of simulating large-scale biological systems faster than many cluster-based supercomputing platforms.

  12. Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor

    NASA Astrophysics Data System (ADS)

    González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco

    2013-12-01

    This contribution focuses on the optimization of matching-based motion estimation algorithms, widely used in video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization, which locates code leaks; afterward, a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible combination of on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination of on-chip memory and SDRAM for the Nios II processor.
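
    As a point of reference for the class of algorithm being accelerated, the sketch below is a plain-software, exhaustive block-matching search using the sum of absolute differences (SAD) criterion. The block size, search range, and function name are illustrative choices made here; this is not the Nios II custom-instruction implementation described above.

```python
# Minimal software reference for SAD-based full-search block matching.
import numpy as np

def best_match(ref, cur, top, left, block=16, search=8):
    """Find the motion vector of one block by full search within +/- search pixels."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                                   # candidate outside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()              # matching cost
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # synthetic global motion
print(best_match(ref, cur, top=16, left=16))     # expect mv == (-2, 3), undoing the roll
```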

  13. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional, general-geometry GEM code.

  14. Transverse effects in plasma wakefield acceleration at FACET - Simulation studies

    SciTech Connect

    Adli, E.; Hogan, M.; Frederico, J.; Litos, M. D.; An, W.; Mori, W.

    2012-12-21

    We investigate transverse effects in the plasma-wakefield acceleration experiments planned and ongoing at FACET. We use PIC simulation tools, mainly QuickPIC, to simulate the interaction of the drive electron beam and the plasma. In FACET a number of beam dynamics knobs, including dispersion and bunch length knobs, can be used to vary the beam transverse characteristics in the plasma. We present simulation results and the status of the FACET experimental searches.

  15. Simulation of electron post-acceleration in a two-stage laser Wakefield accelerator

    SciTech Connect

    Reitsma, A.J.W.; Leemans, W.P.; Esarey, E.; Kamp, L.P.J.; Schep, T.J.

    2002-04-01

    Electron bunches produced in self-modulated laser wakefield experiments usually have a broad energy spectrum, with most electrons at low energy (1-3 MeV) and only a small fraction at high energy. We propose and investigate further acceleration of such bunches in a channel-guided resonant laser wakefield accelerator. Two-dimensional simulations with and without the effects of self-consistent beam loading are performed and compared. These results indicate that it is possible to trap about 40 percent of the injected bunch charge and accelerate this fraction to an average energy of about 50 MeV in a plasma channel of a few mm.

  16. Recirculating Linac Acceleration - End-to-End Simulation

    SciTech Connect

    Alex Bogacz

    2010-03-01

    A conceptual design of a high-pass-number Recirculating Linear Accelerator (RLA) for muons is presented. The scheme involves three superconducting linacs (201 MHz): a single-pass linear Pre-accelerator followed by a pair of multi-pass (4.5-pass) 'Dogbone' RLAs. Acceleration starts after ionization cooling at 220 MeV/c and proceeds to 12.6 GeV. The Pre-accelerator captures a large muon phase space and accelerates muons to relativistic energies, while adiabatically decreasing the phase-space volume, so that effective acceleration in the RLA is possible. The RLA further compresses and shapes the longitudinal and transverse phase spaces, while increasing the energy. An appropriate choice of multi-pass linac optics based on FODO focusing assures a large number of passes in the RLA. The proposed 'Dogbone' configuration facilitates simultaneous acceleration of both μ± species through the requirement of mirror-symmetric optics of the return 'droplet' arcs. Finally, the presented end-to-end simulation validates the efficiency and acceptance of the accelerator system.

  17. Accelerating Subsurface Transport Simulation on Heterogeneous Clusters

    SciTech Connect

    Villa, Oreste; Gawande, Nitin A.; Tumeo, Antonino

    2013-09-23

    Reactive transport numerical models simulate chemical and microbiological reactions that occur along a flowpath. These models have to compute reactions for a large number of locations. They solve the set of ordinary differential equations (ODEs) that describes the reactions for each location through the Newton-Raphson technique. This technique involves computing a Jacobian matrix and a residual vector for each set of equations, and then iteratively solving the linearized system by performing Gaussian elimination and LU decomposition until convergence. STOMP, a well-known subsurface flow simulation tool, employs matrices with sizes on the order of 100x100 elements and, for numerical accuracy, uses LU factorization with full pivoting instead of the faster partial pivoting. Modern high performance computing systems are heterogeneous machines whose nodes integrate both CPUs and GPUs, exposing unprecedented amounts of parallelism. To exploit all their computational power, applications must use both types of processing elements. For the case of subsurface flow simulation, this mainly requires implementing efficient batched LU-based solvers and identifying efficient solutions for enabling load balancing among the different processors of the system. In this paper we discuss two approaches that allow scaling STOMP's performance on heterogeneous clusters. We initially identify the challenges in implementing batched LU-based solvers for small matrices on GPUs, and propose an implementation that fulfills STOMP's requirements. We compare this implementation to other existing solutions. Then, we combine the batched GPU solver with an OpenMP-based CPU solver, and present an adaptive load balancer that dynamically distributes the linear systems to solve between the two components inside a node. We show how these approaches, integrated into the full application, provide speedups of 6 to 7 times on large problems, executed on up to 16 nodes of a cluster with two AMD Opteron 6272
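
    For readers unfamiliar with the kernel being discussed, the sketch below is a plain-NumPy reference implementation of LU factorization with full pivoting, applied sequentially to a small "batch" of dense matrices. It is purely illustrative: the function name and batch layout are assumptions made here, and the real GPU-batched solver described above is organized very differently.

```python
# Reference sketch of LU factorization with full pivoting for a batch of
# small dense matrices (processed sequentially on the CPU for clarity).
import numpy as np

def lu_full_pivot(a):
    """Return permutations p, q and packed factors LU with P A Q = L U,
    where L is unit lower triangular (stored below the diagonal) and U is
    upper triangular (stored on and above the diagonal)."""
    a = a.astype(float).copy()
    n = a.shape[0]
    p = np.arange(n)   # row permutation
    q = np.arange(n)   # column permutation
    for k in range(n - 1):
        # full pivoting: largest entry of the trailing submatrix becomes the pivot
        sub = np.abs(a[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i, j = i + k, j + k
        a[[k, i], :] = a[[i, k], :]; p[[k, i]] = p[[i, k]]
        a[:, [k, j]] = a[:, [j, k]]; q[[k, j]] = q[[j, k]]
        a[k+1:, k] /= a[k, k]                               # L column
        a[k+1:, k+1:] -= np.outer(a[k+1:, k], a[k, k+1:])   # Schur complement update
    return p, q, a

batch = np.random.default_rng(2).normal(size=(8, 100, 100))  # eight small systems
factors = [lu_full_pivot(m) for m in batch]                   # "batched" loop on CPU
```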

  18. Multiple processor accelerator for logic simulation

    SciTech Connect

    Catlin, G.M.

    1989-10-17

    This patent describes a computer system coupled to a plurality of users for implementing an event driven algorithm of each of the users. It comprises: a master processor coupled to the users for providing overall control of the computer system and executing the event driven algorithm of each of the users, the master processor further including a master memory; a unidirectional ring bus coupled to the master processor; a plurality of processor modules; an interprocessor bus coupled to the plurality of processors within the module for transferring the simulation data among the processors; and an interface means.

  19. Introducing a new paradigm for accelerators and large experimental apparatus control systems

    NASA Astrophysics Data System (ADS)

    Catani, L.; Zani, F.; Bisegni, C.; Di Pirro, G.; Foggetta, L.; Mazzitelli, G.; Stecchi, A.

    2012-11-01

    The integration of web technologies and web services has been, in recent years, one of the major trends in upgrading and developing distributed control systems for accelerators and large experimental apparatuses. Usually, web technologies have been introduced to complement the control systems with smart add-ons and user-friendly services or, for instance, to safely allow access to the control system for users at remote sites. Despite this still narrow spectrum of employment, some software technologies developed for high-performance web services, although originally intended and optimized for those particular applications, offer features that suggest a deeper integration into a control system and, eventually, their use to develop some of the control system's core components. In this paper, we present the conceptual design of a new control system for a particle accelerator and the associated machine data acquisition system, based on a synergic combination of a nonrelational key/value database and network-distributed object caching. The use of these technologies, to implement respectively continuous data archiving and data distribution between components, brought about the definition of a new control system concept offering a number of interesting features, such as a high level of abstraction of services and components and their integration in a framework that can be seen as a comprehensive service provider that both graphical user interface applications and front-end controllers join for accessing and, to some extent, expanding its functionalities.

  20. SCALED SIMULATION DESIGN OF HIGH QUALITY LASER WAKEFIELD ACCELERATOR STAGES

    SciTech Connect

    Geddes, C.G.R.; Cormier-Michel, E.; Esarey, E.; Schroeder, C.B.; Leemans, W.P.; Bruhwiler, D.L.; Cowan, B.; Nieter, C.; Paul, K.; Cary, J.R.

    2009-05-04

    The design of efficient, high-gradient, laser-driven wakefield accelerator (LWFA) stages using explicit particle-in-cell simulations with physical parameters scaled by plasma density is presented. LWFAs produce electron bunches with few-percent energy spread at 0.1-1 GeV with high accelerating gradients. Design tools are now required to predict and improve the performance and efficiency of future LWFA stages. Scaling physical parameters extends the reach of explicit simulations to address applications including 10 GeV stages and stages for radiation sources, and accurately resolves deep laser depletion to evaluate efficient stages.

  1. Measuring listening effort: driving simulator vs. simple dual-task paradigm

    PubMed Central

    Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth

    2014-01-01

    Objectives The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. Design The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in the driving task or the visual reaction-time task performance across the conditions quantified the change in listening effort. Results Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant. The finding showing that our older (56 to 85 years old) participants’ better speech recognition performance did not result in reduced

  2. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results, which incurs a large computational cost. We have implemented a parallel method for using the SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations to be computed at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU with hybrid parallelization, in which the multiple realizations run simultaneously and the computational tasks within each realization are also parallelized, is about 16 times faster than a sequential simulation on a CPU. We also improved the memory access and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation for various model sizes, we performed SSA simulations on different model sizes and compared the computation times to those of sequential simulations on a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936
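
    For orientation, the sketch below is a minimal serial implementation of Gillespie's direct-method SSA for a toy birth-death system, run for several independent realizations. The rate constants and function name are arbitrary choices made here for illustration; the GPU implementation described above parallelizes exactly this kind of loop across many realizations.

```python
# Gillespie direct-method SSA for a toy birth-death process:
#   0 -> X at rate k_birth,  X -> 0 at rate k_death * x
import numpy as np

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x       # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)      # exponentially distributed waiting time
        x += 1 if rng.random() * a0 < a1 else -1
        times.append(t); counts.append(x)
    return np.array(times), np.array(counts)

# Multiple realizations for statistics (sequential here; parallel on a GPU).
finals = [ssa_birth_death(seed=s)[1][-1] for s in range(100)]
print(np.mean(finals))   # should approach k_birth / k_death = 100
```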

  3. GPU-accelerated micromagnetic simulations using cloud computing

    NASA Astrophysics Data System (ADS)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  4. Scaled simulations of a 10 GeV accelerator

    SciTech Connect

    Cormier-Michel, Estelle; Geddes, C. G. R.; Schroeder, C. B.; Esarey, E.; Leemans, W. P.; Bruhwiler, D. L.; Paul, K.; Cowan, B.

    2009-01-22

    Laser plasma accelerators are able to produce high quality electron beams from 1 MeV to 1 GeV. The next generation of plasma accelerator experiments will likely use a multi-stage approach where a high quality electron bunch is first produced and then injected into an accelerating structure. In this paper we present scaled particle-in-cell simulations of a 10 GeV stage in the quasi-linear regime. We show that physical parameters can be scaled to be able to perform these simulations at reasonable computational cost. Beam loading properties and electron bunch energy gain are calculated. A range of parameter regimes are studied to optimize the quality of the electron bunch at the output of the stage.

  5. Scaled simulations of a 10 GeV accelerator

    SciTech Connect

    Cormier-Michel, Estelle; Geddes, C.G.R; Esarey, E.; Schroeder, C.B.; Bruhwiler, D.L.; Paul, K.; Cowan, B.; Leemans, W.P.

    2008-09-08

    Laser plasma accelerators are able to produce high quality electron beams from 1 MeV to 1 GeV. The next generation of plasma accelerator experiments will likely use a multi-stage approach where a high quality electron bunch is first produced and then injected into an accelerating structure. In this paper we present scaled particle-in-cell simulations of a 10 GeV stage in the quasi-linear regime. We show that physical parameters can be scaled to be able to perform these simulations at reasonable computational cost. Beam loading properties and electron bunch energy gain are calculated. A range of parameter regimes are studied to optimize the quality of the electron bunch at the output of the stage.

  6. Accelerated Vascular Aging as a Paradigm for Hypertensive Vascular Disease: Prevention and Therapy.

    PubMed

    Barton, Matthias; Husmann, Marc; Meyer, Matthias R

    2016-05-01

    Aging is considered the most important nonmodifiable risk factor for cardiovascular disease and death after age 28 years. Because of demographic changes the world population is expected to increase to 9 billion by the year 2050 and up to 12 billion by 2100, with several-fold increases among those 65 years of age and older. Healthy aging and prevention of aging-related diseases and associated health costs have become part of political agendas of governments around the world. Atherosclerotic vascular burden increases with age; accordingly, patients with progeria (premature aging) syndromes die from myocardial infarctions or stroke as teenagers or young adults. The incidence and prevalence of arterial hypertension also increases with age. Arterial hypertension, like diabetes and chronic renal failure, shares numerous pathologies and underlying mechanisms with the vascular aging process. In this article, we review how arterial hypertension resembles premature vascular aging, including the mechanisms by which arterial hypertension (as well as other risk factors such as diabetes mellitus, dyslipidemia, or chronic renal failure) accelerates the vascular aging process. We will also address the importance of cardiovascular risk factor control, including antihypertensive therapy, as a powerful intervention to interfere with premature vascular aging to reduce the age-associated prevalence of diseases such as myocardial infarction, heart failure, hypertensive nephropathy, and vascular dementia due to cerebrovascular disease. Finally, we will discuss the implementation of endothelial therapy, which aims at active patient participation to improve primary and secondary prevention of cardiovascular disease. PMID:27118295

  7. A SW Simulator Paradigm For Spaceborn GMTI Performance Analysis In Sea Clutter

    NASA Astrophysics Data System (ADS)

    Maffei, Marco; Venturini, Roberto

    2013-12-01

    Modern system engineering for Spaceborne Radars (SBRs) relies on rigorous mathematical analysis and related simulation software (SW) tools as an aid to radar performance prediction as well as to support breadboarding activities for novel payloads. This paper outlines the design paradigm of a SW simulator for Spaceborne Ground Moving Target Indicator (GMTI) performance analysis in sea clutter, complying with standard policies of system design and development based on flexibility, modularity, interoperability, and efficiency. Efficacy, however, depends on core engineering issues that have not yet been fully addressed by the scientific and technical community: the enabling technologies for SBRs, the applicability of SBR-GMTI techniques to the marine environment in harsh environmental conditions, and sea clutter modeling.

  8. Beam Dynamics Design and Simulation in Ion Linear Accelerators

    Energy Science and Technology Software Center (ESTSC)

    2006-08-01

    Originally, the ray-tracing code TRACK was developed to fulfill the many special requirements of the Rare Isotope Accelerator facility known as RIA. Since no available beam-dynamics code met all the necessary requirements, modifications were introduced to TRACK to allow end-to-end (from the ion source to the production target) simulations of the RIA machine. TRACK is a general beam-dynamics code and can be applied for the design, commissioning, and operation of modern ion linear accelerators and beam transport systems.

  9. Beam Dynamics Design and Simulation in Ion Linear Accelerators

    SciTech Connect

    Ostroumov, Peter N.; Asseev, Vladislav N.; Mustapha, and Brahim

    2006-08-01

    Originally, the ray-tracing code TRACK was developed to fulfill the many special requirements of the Rare Isotope Accelerator facility known as RIA. Since no available beam-dynamics code met all the necessary requirements, modifications were introduced to TRACK to allow end-to-end (from the ion source to the production target) simulations of the RIA machine. TRACK is a general beam-dynamics code and can be applied for the design, commissioning, and operation of modern ion linear accelerators and beam transport systems.

  10. Monte Carlo simulation of particle acceleration at astrophysical shocks

    NASA Technical Reports Server (NTRS)

    Campbell, Roy K.

    1989-01-01

    A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic-ray spectra derived from Monte Carlo simulations show the observed power-law behavior, just as do the spectra derived from analytic calculations based on a diffusion equation. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool, and this is the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in shock acceleration. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends on the thermal particle pick-up and hence on the details of the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better

  11. Monte Carlo simulation of particle acceleration at astrophysical shocks

    NASA Astrophysics Data System (ADS)

    Campbell, Roy K.

    1989-09-01

    A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic-ray spectra derived from Monte Carlo simulations show the observed power-law behavior, just as do the spectra derived from analytic calculations based on a diffusion equation. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool, and this is the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in shock acceleration. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends on the thermal particle pick-up and hence on the details of the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better

  12. Simulations of ion acceleration at non-relativistic shocks. I. Acceleration efficiency

    SciTech Connect

    Caprioli, D.; Spitkovsky, A.

    2014-03-10

    We use two-dimensional and three-dimensional hybrid (kinetic ions-fluid electrons) simulations to investigate particle acceleration and magnetic field amplification at non-relativistic astrophysical shocks. We show that diffusive shock acceleration operates for quasi-parallel configurations (i.e., when the background magnetic field is almost aligned with the shock normal) and, for large sonic and Alfvénic Mach numbers, produces universal power-law spectra proportional to p^(-4), where p is the particle momentum. The maximum energy of accelerated ions increases with time and is limited only by the finite box size and run time. Acceleration is mainly efficient for parallel and quasi-parallel strong shocks, where 10%-20% of the bulk kinetic energy can be converted to energetic particles, and becomes ineffective for quasi-perpendicular shocks. Also, the generation of magnetic turbulence correlates with efficient ion acceleration and vanishes for quasi-perpendicular configurations. At very oblique shocks, ions can be accelerated via shock drift acceleration, but they only gain a factor of a few in momentum and their maximum energy does not increase with time. These findings are consistent with the degree of polarization and the morphology of the radio and X-ray synchrotron emission observed, for instance, in the remnant of SN 1006. We also discuss the transition from thermal to non-thermal particles in the ion spectrum (the supra-thermal region), and we identify two dynamical signatures peculiar to efficient particle acceleration, namely, the formation of an upstream precursor and the alteration of the standard shock jump conditions.
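
    For context, the p^(-4) scaling quoted above matches the standard test-particle prediction of diffusive shock acceleration, which depends only on the shock compression ratio; the expression below is quoted as general background rather than a result taken from the paper.

\[
f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1}, \qquad r = \frac{u_1}{u_2},
\]

    so a strong shock with r = 4 gives q = 4.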

  13. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes such research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer, and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations. PMID:26406070
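
    To illustrate the operator-splitting idea mentioned above, the sketch below applies Godunov splitting to a toy one-dimensional reaction-diffusion chain: the reaction step is local to each cell, which is what makes it map well onto a GPU. The cell kinetics, grid sizes, and parameters are invented here for illustration and are not the sino-atrial model used in the paper.

```python
# Godunov (first-order) operator splitting of a toy reaction-diffusion chain:
# split step 1 advances local kinetics in every cell, split step 2 advances
# the diffusive coupling between neighboring cells.
import numpy as np

def reaction_step(v, dt, a=0.1):
    # toy local kinetics dv/dt = v (1 - v) (v - a), forward Euler
    return v + dt * v * (1.0 - v) * (v - a)

def diffusion_step(v, dt, dx, D=1e-3):
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return v + dt * D * lap     # boundary cells left untouched (zero Laplacian) for simplicity

v = np.zeros(530)
v[:20] = 1.0                    # 530 toy cells, one end stimulated
dt, dx = 0.01, 0.1
for _ in range(5000):
    v = reaction_step(v, dt)    # split step 1: local reactions (embarrassingly parallel)
    v = diffusion_step(v, dt, dx)   # split step 2: neighbor coupling
```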

  14. A new paradigm for variable-fidelity stochastic simulation and information fusion in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Venturi, Daniele; Parussini, Lucia; Perdikaris, Paris; Karniadakis, George

    2015-11-01

    Predicting the statistical properties of fluid systems based on stochastic simulations and experimental data is a problem of major interest across many disciplines. Even with recent theoretical and computational advancements, no broadly applicable techniques exist that can deal effectively with uncertainty propagation and model inadequacy in high dimensions. To address these problems, we propose a new paradigm for variable-fidelity stochastic modeling, simulation, and information fusion in fluid mechanics. The key idea is to employ recursive Bayesian networks and multi-fidelity information sources (e.g., stochastic simulations at different resolutions) to construct optimal predictors for quantities of interest, e.g., the random temperature field in stochastic Rayleigh-Bénard convection. The object of inference is the quantity of interest at the highest possible level of fidelity, for which we can usually afford only a few simulations. To compute the optimal predictors, we developed a multivariate recursive co-kriging approach that simultaneously takes into account variable fidelity in the space of models (e.g., DNS vs. potential flow solvers), as well as variable fidelity in probability space. Numerical applications are presented and discussed. This research was supported by AFOSR and DARPA.

  15. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Bruhwiler, David L.; Cary, John R.; Cowan, Benjamin M.; Paul, Kevin; Mullowney, Paul J.; Messmer, Peter; Geddes, Cameron G. R.; Esarey, Eric; Cormier-Michel, Estelle; Leemans, Wim; Vay, Jean-Luc

    2009-01-22

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.
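
    As background for the quoted speedup, the commonly cited scaling argument for boosted-frame calculations is that the disparity between the shortest relevant scale (the laser wavelength) and the longest (the acceleration length) contracts in the boosted frame roughly as

\[
S \sim (1+\beta_b)^2 \gamma_b^2 \approx 4\gamma_b^2 \qquad (\beta_b \to 1),
\]

    where \(\gamma_b\) is the Lorentz factor of the frame boost; a boost of a few tens is therefore consistent with the factor of roughly 2,000 reported above. This is the generic estimate, quoted here as context rather than a number taken from the paper.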

  16. Simulating An Acceleration Schedule For NDCX-II

    SciTech Connect

    Sharp, W M; Friedman, A; Grote, D P; Henestroza, E; Leitner, M A; Waldron, W L

    2009-05-18

    The Virtual National Laboratory for Heavy-Ion Fusion Science is developing a physics design for NDCX-II, an experiment to study warm dense matter heated by ions. Present plans call for using 34 induction cells to accelerate 45 nC of Li+ ions to more than 3 MeV, followed by neutralized drift-compression. To heat targets to the desired temperatures, the beam must be compressed to a millimeter-scale radius and a duration of about 1 ns. A novel NDCX-II acceleration schedule has been developed using an interactive one-dimensional particle-in-cell simulation ASP to model the longitudinal physics and axisymmetric WARP simulations to validate the 1-D model and add transverse focusing. Three-dimensional Warp runs have been used recently to study the sensitivity to misalignments in the focusing solenoids.

  17. SIMULATING AN ACCELERATION SCHEDULE FOR NDCX-II

    SciTech Connect

    Sharp, W.M.; Friedman, A.; Grote, D.P.; Henestroza, E.; Leitner, M.A.; Waldron, W.L.

    2009-05-01

    The Virtual National Laboratory for Heavy-Ion Fusion Science is developing a physics design for NDCX-II, an experiment to study warm dense matter heated by ions. Present plans call for using 34 induction cells to accelerate 45 nC of Li+ ions to more than 3 MeV, followed by neutralized drift-compression. To heat targets to the desired temperatures, the beam must be compressed to a millimeter-scale radius and a duration of about 1 ns. A novel NDCX-II acceleration schedule has been developed using an interactive one-dimensional particle-in-cell simulation ASP to model the longitudinal physics and axisymmetric WARP simulations to validate the 1-D model and add transverse focusing. Three-dimensional Warp runs have been used recently to study the sensitivity to misalignments in the focusing solenoids.

  18. Simulation of cardiovascular response to acceleration stress following weightless exposure

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; Leonard, J. I.

    1983-01-01

    Physiological adjustments taking place during space flight tend to reduce the tolerance of the crew to headward (+Gz) acceleration experienced during the reentry phase of the flight. This reduced tolerance to acceleration stress apparently arises from an adaptation to the microgravity environment of space, including a decrease in the total circulating blood volume. Countermeasures such as anti-g garments have long been known to improve the tolerance to headward g-force, but their effectiveness in space flight has not been fully evaluated. The simulation study presented in this paper is concerned with the response of the cardiovascular system to g-stress following cardiovascular deconditioning, resulting from exposure to weightlessness, or any of its ground-based experimental analogs. The results serve to demonstrate the utility of mathematical modeling and computer simulation for studying the causes of orthostatic intolerance and the remedial measures to lessen it.

  19. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Paul, K.; Cary, J.R.; Cowan, B.; Bruhwiler, D.L.; Geddes, C.G.R.; Mullowney, P.J.; Messmer, P.; Esarey, E.; Cormier-Michel, E.; Leemans, W.P.; Vay, J.-L.

    2008-09-10

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.

  20. The common component architecture for particle accelerator simulations.

    SciTech Connect

    Dechow, D. R.; Norris, B.; Amundson, J.; Mathematics and Computer Science; Tech-X Corp; FNAL

    2007-01-01

    Synergia2 is a beam dynamics modeling and simulation application for high-energy accelerators such as the Tevatron at Fermilab and the International Linear Collider, which is now under planning and development. Synergia2 is a hybrid, multilanguage software package comprising two separate accelerator physics packages (Synergia and MaryLie/Impact) and one high-performance computer science package (PETSc). We describe our approach to producing a set of beam dynamics-specific software components based on the Common Component Architecture specification. Among other topics, we describe particular experiences with the following tasks: using Python steering to guide the creation of interfaces and to prototype components; working with legacy Fortran codes; and building an example component-based beam dynamics simulation.

  1. The Particle Accelerator Simulation Code PyORBIT

    SciTech Connect

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M; Shishlo, Andrei P

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of the algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in C++. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  2. Primary simulation and experimental results of a coaxial plasma accelerator

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Huang, J.; Han, J.; Zhang, Z.; Quan, R.; Wang, L.; Yang, X.; Feng, C.

    A coaxial plasma accelerator with a compressing coil is developed to simulate the impact and erosion effects of space debris on the exposed materials of spacecraft. During its adjustment operation, several measurements are conducted, including the discharge current by a Rogowski coil, the average plasma speed in the coaxial gun by magnetic coils, and the ejected particle speed by a piezoelectric sensor. In concert with the experiment, a primary physical model is constructed in which only the coaxial gun is taken into account, with the compressor coil not considered because of its unimportant contribution to the plasma ejection speed. The calculation results from the model agree well with the diagnostic results, allowing for some simplifying assumptions. Based on the simulation results, some important suggestions for the optimum design and adjustment of the accelerator are obtained for its later operation.

  3. Note: Numerical simulation and experimental validation of accelerating voltage formation for a pulsed electron accelerator

    SciTech Connect

    Egorov, I.

    2014-06-15

    This paper describes the development of a computational model of a pulsed voltage generator for a repetitive electron accelerator. The model is based on the circuit schematic of the generator, supplemented with the parasitic elements of the construction. Verification of the circuit model was achieved by comparison of simulation with experimental results, where reasonable agreement was demonstrated for a wide range of generator load resistances.

  4. Synergia: a modern tool for accelerator physics simulation

    SciTech Connect

    Spentzouris, P.; Amundson, J.; /Fermilab

    2004-10-01

    High-precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. Synergia is a high-fidelity parallel beam dynamics simulation package with fully three-dimensional space-charge capabilities and a higher-order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles.

  5. Community Petascale Project for Accelerator Science and Simulation

    SciTech Connect

    Warren B. Mori

    2013-02-01

    The UCLA Plasma Simulation Group is a major partner of the Community Petascale Project for Accelerator Science and Simulation. This is the final technical report. We include an overall summary, a list of publications, and individual progress reports for each year. During the past five years we have made tremendous progress in enhancing the capabilities of OSIRIS and QuickPIC, in developing new algorithms and data structures for PIC codes to run on GPUs and many future core architectures, and in using these codes to model experiments and make new scientific discoveries. Here we summarize some highlights for which SciDAC was a major contributor.

  6. Design of Accelerator Online Simulator Server Using Structured Data

    SciTech Connect

    Shen, Guobao; Chu, Chungming; Wu, Juhao; Kraimer, Martin; /Argonne

    2012-07-06

    Model-based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high-level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network-accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.

  7. Accelerating particle-in-cell simulations using multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Ricketson, Lee

    2015-11-01

    Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
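
    To make the MLMC idea concrete, the sketch below applies the classic telescoping estimator to a toy Langevin (Ornstein-Uhlenbeck) SDE with Euler-Maruyama time stepping: fine and coarse paths are driven by the same Brownian increments so the level corrections have small variance. The SDE, level counts, and sample sizes are arbitrary illustrative choices; the PIC generalization described above additionally couples the self-consistent fields.

```python
# Multilevel Monte Carlo (telescoping) estimate of E[X_T] for dX = -theta X dt + sigma dW.
import numpy as np

rng = np.random.default_rng(3)

def euler_pair(level, n_samples, T=1.0, theta=1.0, sigma=0.5, x0=1.0, M=2):
    """Fine path with M**level steps and coupled coarse path with M**(level-1)
    steps, both driven by the same Brownian increments."""
    n_f = M ** level
    dt_f = T / n_f
    dW = rng.normal(scale=np.sqrt(dt_f), size=(n_samples, n_f))
    xf = np.full(n_samples, x0)
    for k in range(n_f):
        xf = xf - theta * xf * dt_f + sigma * dW[:, k]
    if level == 0:
        return xf, np.zeros(n_samples)
    dt_c = T / (n_f // M)
    dWc = dW.reshape(n_samples, n_f // M, M).sum(axis=2)   # coupled coarse increments
    xc = np.full(n_samples, x0)
    for k in range(n_f // M):
        xc = xc - theta * xc * dt_c + sigma * dWc[:, k]
    return xf, xc

# E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; fewer samples are needed on the
# expensive fine levels because the correction variance decays with level.
levels, samples = [0, 1, 2, 3, 4], [20000, 10000, 5000, 2500, 1250]
estimate = 0.0
for l, n in zip(levels, samples):
    fine, coarse = euler_pair(l, n)
    estimate += np.mean(fine - coarse) if l > 0 else np.mean(fine)
print(estimate)   # compare with the exact E[X_T] = exp(-theta * T) ~ 0.368
```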

  8. COMPASS, the COMmunity Petascale Project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    J.R. Cary; P. Spentzouris; J. Amundson; L. McInnes; M. Borland; B. Mustapha; B. Norris; P. Ostroumov; Y. Wang; W. Fischer; A. Fedotov; I. Ben-Zvi; R. Ryne; E. Esarey; C. Geddes; J. Qiang; E. Ng; S. Li; C. Ng; R. Lee; L. Merminga; H. Wang; D.L. Bruhwiler; D. Dechow; P. Mullowney; P. Messmer; C. Nieter; S. Ovtchinnikov; K. Paul; P. Stoltz; D. Wade-Stein; W.B. Mori; V. Decyk; C.K. Huang; W. Lu; M. Tzoufras; F. Tsung; M. Zhou; G.R. Werner; T. Antonsen; T. Katsouleas

    2007-06-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  9. COMPASS, the COMmunity Petascale Project for Accelerator Science And Simulation, a Broad Computational Accelerator Physics Initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Norris, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; /Jefferson Lab /Tech-X, Boulder /UCLA /Colorado U. /Maryland U. /Southern California U.

    2007-11-09

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  10. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.

    2007-07-16

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  11. COMPASS, the COMmunity petascale project for accelerator science and simulation, a broad computational accelerator physics initiative

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Norris, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D. L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W. B.; Decyk, V.; Huang, C. K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G. R.; Antonsen, T.; Katsouleas, T.

    2007-07-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  12. Simulating Electron Clouds in Heavy-Ion Accelerators

    SciTech Connect

    Cohen, R.H.; Friedman, A.; Kireeff Covo, M.; Lund, S.M.; Molvik,A.W.; Bieniosek, F.M.; Seidl, P.A.; Vay, J-L.; Stoltz, P.; Veitzer, S.

    2005-04-07

    Contaminating clouds of electrons are a concern for most accelerators of positively charged particles, but there are some unique aspects of heavy-ion accelerators for fusion and high-energy-density physics which make modeling such clouds especially challenging. In particular, self-consistent electron and ion simulation is required, including a particle advance scheme which can follow electrons in regions where electrons are strongly, weakly, and un-magnetized. The authors describe their approach to such self-consistency, and in particular a scheme for interpolating between full-orbit (Boris) and drift-kinetic particle pushes that enables electron time steps long compared to the typical gyro period in the magnets. They present tests and applications: simulation of electron clouds produced by three different kinds of sources indicates the sensitivity of the cloud shape to the nature of the source; first-of-a-kind self-consistent simulation of electron-cloud experiments on the High-Current Experiment (HCX) at Lawrence Berkeley National Laboratory, in which the machine can be flooded with electrons released by impact of the ion beam on an end plate, demonstrates the ability to reproduce key features of the ion-beam phase space; and simulation of a two-stream instability of thin beams in a magnetic field demonstrates the ability of the large-timestep mover to accurately calculate the instability.

  13. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
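
    For orientation, the kernel that such GPU implementations accelerate is the leapfrog field update of the Yee scheme. The serial NumPy sketch below shows a minimal 1D free-space version with a soft source (normalized units, illustrative parameters); it is a generic textbook sketch, not the authors' CPU/GPU cluster code.

        import numpy as np

        # 1D free-space Yee update; the inner loop is the kernel GPU-accelerated
        # FDTD codes offload, typically one thread (or work-item) per cell.
        nz, nsteps = 400, 600
        ez = np.zeros(nz)
        hy = np.zeros(nz)
        courant = 0.5                                   # Courant number S = c*dt/dz

        for n in range(nsteps):
            hy[:-1] += courant * (ez[1:] - ez[:-1])     # H update
            ez[1:]  += courant * (hy[1:] - hy[:-1])     # E update
            ez[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source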

  14. Estimation of direct laser acceleration in laser wakefield accelerators using particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Shaw, J. L.; Lemos, N.; Marsh, K. A.; Tsung, F. S.; Mori, W. B.; Joshi, C.

    2016-03-01

    Many current laser wakefield acceleration (LWFA) experiments are carried out in a regime where the laser pulse length is on the order of or longer than the wake wavelength and where ionization injection is employed to inject electrons into the wake. In these experiments, the electrons can gain a significant amount of energy from the direct laser acceleration (DLA) mechanism as well as the usual LWFA mechanism. Particle-in-cell (PIC) codes are frequently used to discern the relative contribution of these two mechanisms. However, if the longitudinal resolution used in the PIC simulations is inadequate, it can produce numerical heating that can overestimate the transverse motion, which is important in determining the energy gain due to DLA. We have therefore carried out a systematic study of this LWFA regime by varying the longitudinal resolution of PIC simulations and then examining the energy gain characteristics of both the highest-energy electrons and the bulk electrons. By calculating the contribution of DLA to the final energies of the electrons produced from the LWFA, we find that even at the highest longitudinal resolutions, DLA contributes a significant portion of the energy gained by the highest-energy electrons and also contributes to accelerating the bulk of the charge in the electron beam produced by the LWFA.
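
    A common way to quantify the DLA contribution in PIC data is to accumulate, along each tracked electron trajectory, the work done by the transverse electric field separately from the work done by the longitudinal (wake) field. The Python sketch below is a hedged illustration of that bookkeeping; the trajectory and field arrays are assumed inputs, and the sign and normalization conventions are illustrative rather than those of any particular code.

        import numpy as np

        def energy_partition(vx, vy, vz, Ex, Ey, Ez, dt, q=-1.0):
            """Accumulate work done by transverse (DLA-like) and longitudinal
            (wakefield-like) electric fields along one particle trajectory.

            Inputs are per-time-step arrays sampled at the particle position,
            in normalized units; the split is illustrative bookkeeping only.
            """
            w_transverse = np.sum(q * (Ex * vx + Ey * vy)) * dt
            w_longitudinal = np.sum(q * (Ez * vz)) * dt
            return w_transverse, w_longitudinal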

  15. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated. CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10 or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed. Optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  16. Accelerating transient simulation of linear reduced order models.

    SciTech Connect

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.
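
    As a concrete picture of where implementation choices enter, consider a linear reduced model in descriptor form E x' = A x + B u, y = C x: transient analysis is just time stepping a small dense system, and whether the step matrix is factored once or rebuilt every step largely determines the speedup. The Python/SciPy sketch below, with hypothetical matrices, is a generic backward-Euler illustration rather than the Xyce implementation.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def rom_transient(E, A, B, C, u_of_t, x0, dt, nsteps):
            """Backward-Euler stepping of a reduced model E x' = A x + B u, y = C x.

            The LU factorization of (E - dt*A) is computed once and reused;
            refactoring every step is the kind of implementation choice that
            can erase the benefit of a small reduced system.
            """
            lu = lu_factor(E - dt * A)
            x, ys = x0.copy(), []
            for k in range(1, nsteps + 1):
                rhs = E @ x + dt * (B @ u_of_t(k * dt))
                x = lu_solve(lu, rhs)
                ys.append(C @ x)
            return np.array(ys)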

  17. LEGO - A Class Library for Accelerator Design and Simulation

    SciTech Connect

    Cai, Yunhai

    1998-11-19

    An object-oriented class library for accelerator design and simulation has been designed and implemented in a simple and modular fashion. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and non-linear cases. Recently, Monte Carlo simulation of synchrotron radiation has been added to the library. The code has been used to design and simulate the lattices of PEP-II and SPEAR3, and it has also been used for the commissioning of PEP-II. Some examples of how to use the library will be given.
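
    A minimal example of the symplectic, split-operator integration such a library is built around is the drift-kick-drift map for a hard-edge quadrupole in one transverse plane. The Python sketch below is generic beam-dynamics textbook material and does not reflect LEGO's classes or interfaces.

        def quad_drift_kick_drift(x, px, k1, length, nslices=4):
            """Second-order symplectic (split-operator) map through a quadrupole.

            Per unit length, H ~ px**2/2 + k1*x**2/2 (one transverse plane,
            hard-edge, paraxial); each slice is drift(ds/2) -> kick(ds) -> drift(ds/2).
            """
            ds = length / nslices
            for _ in range(nslices):
                x += px * ds / 2.0      # half drift
                px -= k1 * x * ds       # thin-lens kick
                x += px * ds / 2.0      # half drift
            return x, px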

  18. Simulations of Relativistic Collisionless Shocks: Shock Structure and Particle Acceleration

    SciTech Connect

    Spitkovsky, Anatoly; /KIPAC, Menlo Park

    2006-04-10

    We discuss 3D simulations of relativistic collisionless shocks in electron-positron pair plasmas using the particle-in-cell (PIC) method. The shock structure is mainly controlled by the shock's magnetization (the "sigma" parameter). We demonstrate how the structure of the shock varies as a function of sigma for perpendicular shocks. At low magnetizations the shock is mediated mainly by the Weibel instability which generates transient magnetic fields that can exceed the initial field. At larger magnetizations the shock is dominated by magnetic reflections. We demonstrate where the transition occurs and argue that it is impossible to have very low magnetization collisionless shocks in nature (in more than one spatial dimension). We further discuss the acceleration properties of these shocks, and show that higher magnetization perpendicular shocks do not efficiently accelerate nonthermal particles in 3D. Among other astrophysical applications, this may pose a restriction on the structure and composition of gamma-ray bursts and pulsar wind outflows.

  19. A Multi-Paradigm Modeling Framework to Simulate Dynamic Reciprocity in a Bioreactor

    PubMed Central

    Kaul, Himanshu; Cui, Zhanfeng; Ventikos, Yiannis

    2013-01-01

    Despite numerous technology advances, bioreactors are still mostly utilized as functional black boxes where trial and error eventually leads to the desirable cellular outcome. Investigators have applied various computational approaches to understand the impact the internal dynamics of such devices has on overall cell growth, but such models cannot provide a comprehensive perspective regarding the system dynamics, due to limitations inherent to the underlying approaches. In this study, a novel multi-paradigm modeling platform capable of simulating the dynamic bidirectional relationship between cells and their microenvironment is presented. Designing the modeling platform entailed fully coupling an agent-based modeling platform with a transport-phenomena computational modeling framework. To demonstrate capability, the platform was used to study the impact of bioreactor parameters on the overall cell population behavior and vice versa. In order to achieve this, virtual bioreactors were constructed and seeded. The virtual cells, guided by a set of rules involving the simulated mass transport inside the bioreactor, as well as cell-related probabilistic parameters, were capable of displaying an array of behaviors such as proliferation, migration, chemotaxis, and apoptosis. In this way the platform was shown to capture not only the impact of bioreactor transport processes on cellular behavior but also the influence that cellular activity wields on that very same local mass transport, thereby influencing overall cell growth. The platform was validated by simulating cellular chemotaxis in a virtual direct visualization chamber and comparing the simulation with its experimental analogue. The results presented in this paper are in agreement with published models of similar flavor. The modeling platform can be used as a concept selection tool to optimize bioreactor design specifications. PMID:23555740
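
    The core coupling pattern, a transport solve and an agent update alternately reading and modifying each other's state, can be sketched in a few lines of Python. The toy loop below (2-D diffusion of a nutrient field plus grid-based agents that consume it and occasionally divide, with made-up parameters) illustrates only the pattern and is not the authors' platform.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 64
        nutrient = np.ones((n, n))                 # transport field
        cells = [(n // 2, n // 2)]                 # agent positions
        D, uptake, divide_p = 0.2, 0.05, 0.02      # illustrative parameters

        for step in range(200):
            # transport phase: explicit diffusion step (periodic boundaries)
            lap = (np.roll(nutrient, 1, 0) + np.roll(nutrient, -1, 0)
                   + np.roll(nutrient, 1, 1) + np.roll(nutrient, -1, 1) - 4 * nutrient)
            nutrient += D * lap
            # agent phase: cells consume locally and may divide,
            # feeding back on the very field that guides them
            new_cells = []
            for (i, j) in cells:
                nutrient[i, j] = max(0.0, nutrient[i, j] - uptake)
                if nutrient[i, j] > 0.1 and rng.random() < divide_p:
                    new_cells.append(((i + 1) % n, j))
            cells.extend(new_cells)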

  20. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation also provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in
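
    The computational core that OpenACC, OpenCL, or OpenMP parallelizes in such studies is a reaction-diffusion stencil sweep over the 2-D tissue grid. The serial NumPy sketch below uses generic FitzHugh-Nagumo-style kinetics with made-up parameters to show what that kernel looks like; it is not the cardiac model used in the paper.

        import numpy as np

        n, dt, D = 128, 0.05, 0.1
        v = np.zeros((n, n))        # membrane-potential-like variable
        w = np.zeros((n, n))        # recovery variable
        v[:, :5] = 1.0              # stimulate one edge

        for step in range(500):
            lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                   + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
            # FitzHugh-Nagumo-style kinetics (illustrative parameters only);
            # this grid sweep is what a GPU would execute one thread per point.
            v += dt * (D * lap + v * (v - 0.1) * (1.0 - v) - w)
            w += dt * 0.01 * (0.5 * v - w)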

  1. Isogeometric simulation of Lorentz detuning in superconducting accelerator cavities

    NASA Astrophysics Data System (ADS)

    Corno, Jacopo; de Falco, Carlo; De Gersem, Herbert; Schöps, Sebastian

    2016-04-01

    Cavities in linear accelerators suffer from eigenfrequency shifts due to mechanical deformation caused by the electromagnetic radiation pressure, a phenomenon known as Lorentz detuning. Estimating the frequency shift to the needed accuracy by means of standard finite element methods is a complex task, owing to the inexact representation of the geometry and the need for mesh refinement when low-order basis functions are used. In this paper, we use Isogeometric Analysis for discretizing both the mechanical deformations and the electromagnetic fields in a coupled multiphysics simulation approach. The combined high-order approximation of both leads to high accuracy at a substantially lower computational cost.
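
    For orientation, the eigenfrequency shift produced by a small wall deformation is commonly estimated with Slater's perturbation theorem, evaluated over the deformed volume predicted by the mechanical solve. The formula below is the standard textbook statement, quoted here as background rather than taken from the paper:

        \frac{\Delta\omega}{\omega_0} \approx \frac{\int_{\Delta V}\left(\mu_0\,|\mathbf{H}|^{2} - \varepsilon_0\,|\mathbf{E}|^{2}\right)\,dV}{\int_{V}\left(\mu_0\,|\mathbf{H}|^{2} + \varepsilon_0\,|\mathbf{E}|^{2}\right)\,dV}

    where ΔV is the volume removed (or added) by the deformation and E, H are the unperturbed mode fields over the cavity volume V.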

  2. Internet of Things: a possible change in the distributed modeling and simulation architecture paradigm

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Lessmann, Kurt; Schillero, David

    2016-05-01

    The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact and which distributed simulation domains will be most affected? DDS shares many of the same goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features such as security that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential application, we predict a large base of technology to be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms such as the Data Distribution Service (DDS) share properties in common with distributed modeling and simulation (M&S) and its protocols such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed

  3. Accelerating Climate and Weather Simulations through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  4. Simulation of PEP-II Accelerator Backgrounds Using TURTLE

    SciTech Connect

    Barlow, R.J.; Fieguth, T.; Kozanecki, W.; Majewski, S.A.; Roudeau, P.; Stocchi, A.; /Orsay, LAL

    2006-02-15

    We present studies of accelerator-induced backgrounds in the BaBar detector at the SLAC B-Factory, carried out using LPTURTLE, a modified version of the DECAY TURTLE simulation package. Lost-particle backgrounds in PEP-II are dominated by a combination of beam-gas bremsstrahlung, beam-gas Coulomb scattering, radiative-Bhabha events, and beam-beam blow-up. The radiation damage and detector occupancy caused by the associated electromagnetic shower debris can limit the usable luminosity. In order to understand and mitigate such backgrounds, we have performed a full program of beam-gas and luminosity-background simulations that includes the effects of the detector solenoidal field, detailed modeling of limiting apertures in both collider rings, and optimization of the betatron collimation scheme in the presence of large transverse tails.

  5. Binomial distribution based τ-leap accelerated stochastic simulation

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2005-01-01

    Recently, Gillespie introduced the τ-leap approximate, accelerated stochastic Monte Carlo method for well-mixed reacting systems [J. Chem. Phys. 115, 1716 (2001)]. In each time increment of that method, one executes a number of reaction events, selected randomly from a Poisson distribution, to enable simulation of long times. Here we introduce a binomial distribution τ-leap algorithm (abbreviated as BD-τ method). This method combines the bounded nature of the binomial distribution variable with the limiting reactant and constrained firing concepts to avoid negative populations encountered in the original τ-leap method of Gillespie for large time increments, and thus conserve mass. Simulations using prototype reaction networks show that the BD-τ method is more accurate than the original method for comparable coarse-graining in time.
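
    The essential change relative to the Poisson τ-leap can be sketched directly: for each reaction channel, the firing count in a leap of length τ is drawn from a binomial distribution whose number of trials is capped by the limiting reactant, so populations cannot go negative. The Python sketch below is a schematic single-leap update under those assumptions (sequential channel updates, simplified stoichiometry handling), not the authors' algorithm verbatim.

        import numpy as np

        rng = np.random.default_rng(1)

        def bd_tau_leap(x, nu, propensities, tau):
            """One binomial tau-leap step (schematic).

            x: integer species populations; nu: stoichiometry matrix
            (reactions x species); propensities(x): reaction rates a_j.
            Each channel fires Binomial(k_max, min(1, a_j*tau/k_max)) times,
            with k_max set by the limiting reactant, keeping x non-negative.
            """
            a = propensities(x)
            for j, a_j in enumerate(a):
                consumed = -nu[j]                           # molecules removed per firing
                limits = [x[i] // c for i, c in enumerate(consumed) if c > 0]
                k_max = int(min(limits)) if limits else int(np.ceil(a_j * tau)) + 1
                if k_max <= 0 or a_j <= 0:
                    continue
                p = min(1.0, a_j * tau / k_max)
                k = rng.binomial(k_max, p)
                x = x + k * nu[j]                           # update before next channel
            return x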

  6. Wakefield Simulations for the Laser Acceleration Experiment at SLAC

    SciTech Connect

    Ng, Johnny

    2012-04-18

    Laser-driven acceleration in dielectric photonic band gap structures can provide gradients on the order of GeV/m. The small transverse dimension of the structure, on the order of the laser wavelength, presents interesting wakefield-related issues. Higher order modes can seriously degrade beam quality, and a detailed understanding is needed to mitigate such effects. On the other hand, wakefields also provide a direct way to probe the interaction of a relativistic bunch with the synchronous modes supported by the structure. Simulation studies have been carried out as part of the effort to understand the impact on beam dynamics, and to compare with data from beam experiments designed to characterize candidate structures. In this paper, we present simulation results of wakefields excited by a sub-wavelength bunch in optical photonic band gap structures.

  7. GPU Accelerated Numerical Simulation of Viscous Flow Down a Slope

    NASA Astrophysics Data System (ADS)

    Gygax, Remo; Räss, Ludovic; Omlin, Samuel; Podladchikov, Yuri; Jaboyedoff, Michel

    2014-05-01

    Numerical simulations are an effective tool in natural risk analysis. They are useful to determine the propagation and the runout distance of gravity-driven movements such as debris flows or landslides. To evaluate these processes, an approach combining analogue laboratory experiments with a GPU-accelerated numerical simulation of the flow of a viscous liquid down an inclined slope is considered. The physical processes underlying large gravity-driven flows share certain aspects with the propagation of debris mass in a rockslide and the spreading of water waves. Several studies have shown that numerical implementations of the physical processes of viscous flow reproduce laboratory observations well, both quantitatively and qualitatively. For a process this well explored, we can concentrate on its numerical transcription and on applying the code in a GPU-accelerated environment to obtain a 3D simulation. The objective of providing a high-resolution numerical solution through NVIDIA-CUDA GPU parallel processing is to increase the speed of the simulation and the accuracy of the prediction. The main goal is to write easily adaptable and compact code on the widely used MATLAB platform, which will be translated to C-CUDA to achieve higher resolution and processing speed while running on an NVIDIA graphics card cluster. The numerical model, based on a finite-difference scheme, is compared to analogue laboratory experiments. In this way, the numerical model parameters are adjusted to reproduce the movements observed by high-speed camera acquisitions during the laboratory experiments.

  8. Using the Statecharts paradigm for simulation of patient flow in surgical care.

    PubMed

    Sobolev, Boris; Harel, David; Vasilakis, Christos; Levy, Adrian

    2008-03-01

    Computer simulation of patient flow has been used extensively to assess the impacts of changes in the management of surgical care. However, little research is available on the utility of existing modeling techniques. The purpose of this paper is to examine the capacity of Statecharts, a system of graphical specification, for constructing a discrete-event simulation model of the perioperative process. The Statecharts specification paradigm was originally developed for representing reactive systems by extending the formalism of finite-state machines through notions of hierarchy, parallelism, and event broadcasting. Hierarchy permits subordination between states so that one state may contain other states. Parallelism permits more than one state to be active at any given time. Broadcasting of events allows one state to detect changes in another state. In the context of the peri-operative process, hierarchy provides the means to describe steps within activities and to cluster related activities, parallelism provides the means to specify concurrent activities, and event broadcasting provides the means to trigger a series of actions in one activity according to transitions that occur in another activity. Combined with hierarchy and parallelism, event broadcasting offers a convenient way to describe the interaction of concurrent activities. We applied the Statecharts formalism to describe the progress of individual patients through surgical care as a series of asynchronous updates in patient records generated in reaction to events produced by parallel finite-state machines representing concurrent clinical and managerial activities. We conclude that Statecharts capture successfully the behavioral aspects of surgical care delivery by specifying permissible chronology of events, conditions, and actions. PMID:18390170
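
    Two of the ingredients emphasized here, parallelism and event broadcasting, can be sketched in a few lines of Python. The toy classes and perioperative example below are hypothetical illustrations, not Harel's Statecharts tooling or the authors' model: several orthogonal regions are active at once, and an event raised anywhere is delivered to all of them.

        class Region:
            """One orthogonal (parallel) region: a small finite-state machine."""
            def __init__(self, name, transitions, initial):
                self.name, self.transitions, self.state = name, transitions, initial

            def on_event(self, event):
                key = (self.state, event)
                if key in self.transitions:
                    self.state = self.transitions[key]

        class Statechart:
            """Parallel composition of regions with event broadcasting."""
            def __init__(self, regions):
                self.regions = regions

            def broadcast(self, event):
                for r in self.regions:          # every active region sees the event
                    r.on_event(event)

        # Toy example: clinical activity and bed management run in parallel.
        clinical = Region("clinical",
                          {("waiting", "or_ready"): "in_surgery",
                           ("in_surgery", "surgery_done"): "recovery"},
                          "waiting")
        beds = Region("beds",
                      {("bed_free", "or_ready"): "bed_held",
                       ("bed_held", "surgery_done"): "bed_occupied"},
                      "bed_free")
        chart = Statechart([clinical, beds])
        for e in ["or_ready", "surgery_done"]:
            chart.broadcast(e)
        print(clinical.state, beds.state)       # recovery bed_occupied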

  9. Neutron Scattering Simulations at the University of Kentucky Accelerator Laboratory

    NASA Astrophysics Data System (ADS)

    Nguyen, Thienan; Jackson, Daniel; Hicks, S. F.; Rice, Ben; Vanhoy, J. R.

    2015-10-01

    The Monte-Carlo N-Particle Transport code (MCNP) has many applications, ranging from radiography to reactor design. It has particle interaction capabilities, making it useful for simulating neutron collisions on surfaces of varying compositions. The neutron flux within the accelerator complex at the University of Kentucky was simulated using MCNP, and the simulation was used to analyze the complex's ability to contain and thermalize, to an acceptable level inside the neutron hall and adjoining rooms, the 7 MeV neutrons produced via the 2H(d,n)3He source reaction. This will aid in confirming the safety of researchers who are working in the adjacent control room. Additionally, the neutron transport simulation was used to analyze the impact of the collimator copper shielding on various detectors located around the neutron scattering hall, with the aim of explaining any background neutrons observed at these detectors. The simulation shows that the complex performs very well with regard to neutron containment and thermalization. Also, the tracking information for the paths taken by the neutrons shows that most of the neutrons' lives are spent inside the neutron hall. Finally, the neutron counts were analyzed at the positions of the neutron monitor detectors located at 90 and 45 degrees relative to the incident beam direction. This project was supported in part by the DOE NEUP Grant NU-12-KY-UK-0201-05 and the Donald A. Cowan Physics Institute at the University of Dallas.

  10. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations.

    PubMed

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10(6)-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  11. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 106-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques

  12. Accelerating climate simulation analytics via multilevel aggregation and synthesis

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Ravindran, Krishnaraj; Gunasekaran, Raghul; Vazhkudai, Sudharshan; Butt, Ali

    2015-04-01

    A typical set of ultra-high-resolution (0.25 deg) climate simulation experiments produces over 50,000 files, ranging in size from 10^1 MB to 10^2 GB each, for a total volume of nearly 1 PB of data. The execution of the experiments will require over 100 million CPU hours on the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The output from the simulations must then be archived, analyzed, and distributed to the project partners in a timely manner. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high-volume access to other computational systems used to analyze the data and synthesize results. But data movement is one of the most expensive and time-consuming steps in the scientific workflow. It is expedient to complete the diagnostics and analytics before the files are archived for long-term storage. Nevertheless, it is often necessary to fetch files back from the archive for further analysis. We are implementing a solution to query, extract, index, and summarize key statistical information from the individual CF-compliant netCDF files; this information is then stored for ready access in a database. The contents of the database can be related back to the archived files from which they were extracted. The statistical information can be quickly aggregated to provide meaningful statistical summaries that can then be related to observations and/or other simulation results for synthesis and further inference. The scientific workflow at OLCF, augmented by these expedited analytics capabilities, will allow the users of our systems to shorten the time required to derive meaningful and relevant science results. We will illustrate some of the time-saving benefits via a few typical use cases, based on recent large-scale simulation experiments using the Community Earth System Model (CESM) and the DOE Accelerated Climate Model for Energy (ACME).
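
    A hedged Python sketch of the extract-and-index idea: pull a few summary statistics from each CF-compliant netCDF file and store them, keyed by file path and variable name, in a small database for later aggregation. The netCDF4 and sqlite3 usage is generic; the variable list and schema are hypothetical, not the system described here.

        import sqlite3
        import numpy as np
        from netCDF4 import Dataset

        def index_file(db_path, nc_path, variables):
            """Store per-variable summary statistics for one simulation output file."""
            con = sqlite3.connect(db_path)
            con.execute("""CREATE TABLE IF NOT EXISTS summary
                           (path TEXT, var TEXT, vmin REAL, vmax REAL, vmean REAL)""")
            with Dataset(nc_path) as ds:
                for var in variables:
                    # replace masked values so the statistics ignore them
                    data = np.ma.filled(ds.variables[var][:], np.nan).astype(float)
                    con.execute("INSERT INTO summary VALUES (?,?,?,?,?)",
                                (nc_path, var, float(np.nanmin(data)),
                                 float(np.nanmax(data)), float(np.nanmean(data))))
            con.commit()
            con.close()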

  13. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  14. Final Progress Report - Heavy Ion Accelerator Theory and Simulation

    SciTech Connect

    Haber, Irving

    2009-10-31

    The use of a beam of heavy ions to heat a target for the study of warm dense matter physics, high energy density physics, and ultimately to ignite an inertial fusion pellet, requires the achievement of beam intensities somewhat greater than have traditionally been obtained using conventional accelerator technology. The research program described here has substantially contributed to understanding the basic nonlinear intense-beam physics that is central to the attainment of the requisite intensities. Since it is very difficult to reverse intensity dilution, avoiding excessive dilution over the entire beam lifetime is necessary for achieving the required beam intensities on target. The central emphasis in this research has therefore been on understanding the nonlinear mechanisms that are responsible for intensity dilution and which generally occur when intense space-charge-dominated beams are not in detailed equilibrium with the external forces used to confine them. This is an important area of study because such lack of detailed equilibrium can be an unavoidable consequence of the beam manipulations such as acceleration, bunching, and focusing necessary to attain sufficient intensity on target. The primary tool employed in this effort has been the use of simulation, particularly the WARP code, in concert with experiment, to identify the nonlinear dynamical characteristics that are important in practical high-intensity accelerators. This research has gradually made a transition from the study of idealized systems and comparisons with theory, to the study of the fundamental scaling of intensity dilution in intense beams, and more recently to explicit identification of the mechanisms relevant to actual experiments. This work consists of two categories: work in direct support of beam physics directly applicable to NDCX, and a larger effort to further the general understanding of space-charge-dominated beam physics.

  15. Reduced-order simulation of large accelerator structures

    NASA Astrophysics Data System (ADS)

    Cooke, S. J.

    2008-05-01

    Simulating electromagnetic waves inside finite periodic or almost periodic three-dimensional structures is important to research in linear particle acceleration, high power microwave generation, and photonic band gap structures. While eigenmodes of periodic structures can be determined from analysis of a single unit cell, based on Floquet theory, the general case of aperiodic structures, with defects or nonuniform properties, typically requires 3D electromagnetic simulation of the entire structure. When the structure is large and high accuracy is necessary this can require high-performance computing techniques to obtain even a few eigenmodes [Z. Li et al., Nucl. Instrum. Methods Phys. Res., Sect. A 558, 168 (2006)]. To confront this problem, we describe an efficient, field-based algorithm that can accurately determine the complete eigenmode spectrum for extended aperiodic structures, up to some chosen frequency limit. The new method combines domain decomposition with a nontraditional, dual eigenmode representation of the fields local to each cell of the structure. Two related boundary value eigenproblems are solved numerically in each cell, with (a) electrically shielded, and (b) magnetically shielded interfaces, to determine a combined set of basis fields. By using the dual solutions in our field representation we accurately represent both the electric and magnetic surface currents that mediate coupling at the interfaces between adjacent cells. The solution is uniformly convergent, so that typically only a few modes are used in each cell. We present results from 2D and 3D simulations that demonstrate the speed and low computational needs of the algorithm.

  16. Requirements for Simulating Space Radiation With Particle Accelerators

    NASA Technical Reports Server (NTRS)

    Schimmerling, W.; Wilson, J. W.; Cucinotta, F.; Kim, M-H Y.

    2004-01-01

    Interplanetary space radiation consists of fully ionized nuclei of atomic elements with high energy for which only the few lowest energy ions can be stopped in shielding materials. The health risk from exposure to these ions and their secondary radiations generated in the materials of spacecraft and planetary surface enclosures is a major limiting factor in the management of space radiation risk. Accurate risk prediction depends on a knowledge of basic radiobiological mechanisms and how they are modified in the living tissues of a whole organism. To a large extent, this knowledge is not currently available. It is best developed at ground-based laboratories, using particle accelerator beams to simulate the components of space radiation. Different particles, in different energy regions, are required to study different biological effects, including beams of argon and iron nuclei in the energy range 600 to several thousand MeV/nucleon and carbon beams in the energy range of approximately 100 MeV/nucleon to approximately 1000 MeV/nucleon. Three facilities, one each in the United States, in Germany and in Japan, currently have the partial capability to satisfy these constraints. A facility has been proposed using the Brookhaven National Laboratory Booster Synchrotron in the United States; in conjunction with other on-site accelerators, it will be able to provide the full range of heavy ion beams and energies required. International cooperation in the use of these facilities is essential to the development of a safe international space program.

  17. Active microrheology of Brownian suspensions via Accelerated Stokesian Dynamics simulations

    NASA Astrophysics Data System (ADS)

    Chu, Henry; Su, Yu; Gu, Kevin; Hoh, Nicholas; Zia, Roseanna

    2015-11-01

    The non-equilibrium rheological response of colloidal suspensions is studied via active microrheology utilizing Accelerated Stokesian Dynamics simulations. In our recent work, we derived the theory for micro-diffusivity and suspension stress in dilute suspensions of hydrodynamically interacting colloids. This work revealed that force-induced diffusion is anisotropic, with qualitative differences between diffusion along the line of the external force and that transverse to it, and connected these effects to the role of hydrodynamic, interparticle, and Brownian forces. This work also revealed that these forces play a similar qualitative role in the anisotropy of the stress and in the evolution of the non-equilibrium osmotic pressure. Here, we show that theoretical predictions hold for suspensions ranging from dilute to near maximum packing, and for a range of flow strengths from near-equilibrium to the pure-hydrodynamic limit.

  18. Simulation of laser wakefield acceleration of an ultrashort electron bunch.

    PubMed

    Reitsma, A J; Goloviznin, V V; Kamp, L P; Schep, T J

    2001-04-01

    The dynamics of the acceleration of a short electron bunch in a strong plasma wave excited by a laser pulse in a plasma channel is studied both analytically and numerically in slab geometry. In our simulations, a fully nonlinear, relativistic hydrodynamic description for the plasma wave is combined with particle-in-cell methods for the description of the bunch. Collective self-interactions within the bunch are fully taken into account. The existence of adiabatic invariants of motion is shown to have important implications for the final beam quality. Similar to the one-dimensional case, the natural evolution of the bunch is shown to lead, under proper initial conditions, to a minimum in the relative energy spread. PMID:11308961

  19. Kinetic Simulations of SNR Shocks- Prospects for Particle Acceleration

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra

    2006-02-01

    Recent kinetic simulations of supercritical, quasi-perpendicular shocks yield time-varying shock solutions that cyclically reform on the spatio-temporal scales of the incoming protons. Whether a shock solution is stationary or reforming depends upon the plasma parameters which, for SNR shocks, are ill defined but believed to be within the time-dependent regime. We will first review the structure and evolution of the time-dependent solutions, and the acceleration processes of the ions and electrons in these time-dependent fields, for a proton-electron plasma. We will then present recent results for a three-component plasma: background protons; electrons; and a second heavier ion population. These acceleration mechanisms may generate a suprathermal "injection" population - a seed population for subsequent acceleration at the shock, which can in turn generate particles at cosmic ray energies.

  20. CUDA accelerated simulation of needle insertions in deformable tissue

    NASA Astrophysics Data System (ADS)

    Patriciu, Alexandru

    2012-10-01

    This paper presents a stiff needle-deformable tissue interaction model. The model uses a mesh-less discretization of the continuum, thus avoiding the expensive remeshing required by finite element models. The proposed model can accommodate both linear and nonlinear material characteristics. The needle-deformable tissue interaction is modeled through fundamental boundaries. The forces applied by the needle on the tissue are divided into tangential forces and constraint forces. The constraint forces are adaptively computed such that the material is properly constrained by the needle. The implementation is accelerated using NVIDIA CUDA. We present a detailed analysis of the execution timing in both the serial and parallel cases. The proposed needle insertion model was integrated into custom software that loads DICOM images, generates the deformable model, and can simulate different insertion strategies.

  1. Simulation of electromagnetic scattering with stationary or accelerating targets

    NASA Astrophysics Data System (ADS)

    Funaro, Daniele; Kashdan, Eugene

    2015-12-01

    The scattering of electromagnetic waves by an obstacle is analyzed through a set of partial differential equations combining Maxwell's model with the mechanics of fluids. Solitary-type EM waves with compact support can easily be modeled in this context, since they turn out to be explicit solutions. From the numerical viewpoint, the interaction of these waves with a material body is examined. Computations are carried out via a parallel high-order finite-difference code. Due to the presence of a pressure gradient in the model equations, waves hitting the obstacle may impart acceleration to it. Some illustrative 2D dynamical configurations are then studied, enabling the simulation of photon-particle interactions through classical arguments.

  2. Reduced-Order Simulation of Large Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Cooke, Simon

    2007-11-01

    Simulating electromagnetic waves inside finite periodic or almost periodic three-dimensional structures is important to research in linear particle acceleration, high power microwave generation, and photonic bandgap structures. While eigenmodes of periodic structures can be determined from analysis of a single unit cell, based on Floquet theory, the general case of aperiodic structures, with defects or non-uniform properties, typically requires 3D electromagnetic simulation of the entire structure. When the structure is large and high accuracy is necessary this can require high-performance computing techniques to obtain even a few eigenmodes [1]. To confront this problem, we describe an efficient, field-based algorithm that can accurately determine the complete eigenmode spectrum for extended aperiodic structures, up to some chosen frequency limit. The new method combines domain decomposition with a non-traditional, dual eigenmode representation of the fields local to each cell of the structure. Two related boundary value eigenproblems are solved numerically in each cell, with (a) electrically shielded, and (b) magnetically shielded interfaces, to determine a combined set of basis fields. By using the dual solutions in our field representation we accurately represent both the electric and magnetic surface currents that mediate coupling at the interfaces between adjacent cells. The solution is uniformly convergent, so that typically only a few modes are used in each cell. We present results from 3D simulations that demonstrate the speed and low computational needs of the algorithm. [1] Z. Li, et al, Nucl. Instrum. Methods Phys. Res., Sect. A 558 (2006), 168-174.

  3. Saturn: A large area x-ray simulation accelerator

    SciTech Connect

    Bloomquist, D.D.; Stinnett, R.W.; McDaniel, D.H.; Lee, J.R.; Sharpe, A.W.; Halbleib, J.A.; Schlitt, L.G.; Spence, P.W.; Corcoran, P.

    1987-01-01

    Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area x-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology made by the ICF program in recent years and much of the existing PBFA-I support system. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an x-ray exposure capability of 5 x 10^12 rads/s (Si) and 5 cal/g (Au) over 500 cm^2.

  4. Laboratory Simulation of Ion Acceleration Mechanisms in the Suprauroral Region.

    NASA Astrophysics Data System (ADS)

    Koslover, Robert Avner

    1987-09-01

    We report the results of a series of laboratory experiments intended to simulate particular aspects of ion acceleration processes that have been observed or are believed to occur in the suprauroral region of the Earth's magnetosphere. Beam-generated lower hybrid waves (LHW) and current-driven electrostatic ion cyclotron waves (EICW) have both been proposed as responsible for low-altitude perpendicular ion acceleration, leading to the formation of ion conics at higher altitudes (after mirroring in the geomagnetic field). We model, by experiments in the laboratory, the mechanisms generating the ion velocity distributions and radio frequency waves observed in the suprauroral region. Experiments were performed in two linear plasma devices: the UCI Q -machine and UCI Magnetic Mirror. RF waves were launched by antennas or excited by electron currents or beams. Laser induced fluorescence (LIF) provided a sensitive non-perturbing diagnostic for ion velocity distributions. RF and Langmuir probes were used for electrical measurements. Antenna launched LHW produced considerable perpendicular ion heating, generating 'tail' formation followed by a bulk 'maxwellian' heating. Both broadband and narrowband LHW produced similar effects. Frequency spectra displayed multiple harmonics of the input antenna signal and also signals of lower frequency, the latter identified as due to parametric decay. Operating the UCI Magnetic Mirror as a double plasma device, a low energy, low density electron beam was shown to generate very broadband noise above the LH resonance frequency. Two-probe correlation studies indicated the existence of a wide band of k values as well. The noise has been tentatively identified as beam-generated LHW. In order to study the formation of ion conics, a new diagnostic method making use of LIF and computed tomography was developed. A description is given of this new technique, which we call optical tomography. Using this approach, we successfully observed the

  5. Simulation of particle acceleration in the PLASMONX project

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo

    2010-02-01

    In this paper I will present some numerical studies and parameter scans performed with the electromagnetic, relativistic, fully self-consistent particle-in-cell (PIC) code ALaDyn (Acceleration by LAser and DYNamics of charged particles), concerning electron acceleration via plasma waves in the framework of the INFN-PLASMONX (PLASma acceleration and MONochromatic X-ray production) project. In particular I will focus on the modelling of SITE (Self-Injection Test Experiment), which will be a relevant part of the commissioning of the FLAME laser. Some issues related to the quality of the accelerated bunch will be discussed.

  6. An investigation into the feasibility of implementing fractal paradigms to simulate cancellous bone structure.

    PubMed

    Haire, T J; Ganney, P S; Langton, C M

    2001-01-01

    Cancellous bone consists of a framework of solid trabeculae interspersed with bone marrow. The structure of the bone tissue framework is highly convoluted and complex, being fractal and statistically self-similar over a limited range of magnifications. To date, the structure of natural cancellous bone tissue has been defined using 2D and 3D imaging, with no facility to modify and control the structure. The potential of four computer-generated paradigms has been reviewed based upon knowledge of other fractal structures and chaotic systems, namely Diffusion Limited Aggregation (DLA), Percolation and Epidemics, Cellular Automata, and a regular Grid with randomly relocated nodes. The resulting structures were compared for their ability to create realistic structures of cancellous bone rather than reflecting growth and form processes. Although the creation of realistic computer-generated cancellous bone structures is difficult, it should not be impossible. Future work considering the combination of fractal and chaotic paradigms is underway. PMID:11328644

  7. Accelerated Molecular Dynamics Simulations of Reactive Hydrocarbon Systems

    SciTech Connect

    Stuart, Steven J.

    2014-02-25

    The research activities in this project consisted of four different sub-projects. Three different accelerated dynamics techniques (parallel replica dynamics, hyperdynamics, and temperature-accelerated dynamics) were applied to the modeling of pyrolysis of hydrocarbons. In addition, parallel replica dynamics was applied to modeling of polymerization.

  8. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    SciTech Connect

    Badal, Andreu; Badano, Aldo

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
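
    The per-photon work that maps naturally onto one GPU thread is the familiar sample-a-free-path, move, interact loop. The Python snippet below is a bare-bones serial analogue in a homogeneous medium with made-up coefficients and isotropic scattering; it has none of the voxelized geometry or PENELOPE physics of the code described here.

        import numpy as np

        rng = np.random.default_rng(7)
        mu_total, p_absorb = 0.2, 0.3        # attenuation (1/cm) and absorption prob. (made up)

        def track_photon(max_steps=1000):
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])
            for _ in range(max_steps):
                s = -np.log(rng.random()) / mu_total         # sample free path length
                pos = pos + s * direction
                if rng.random() < p_absorb:                   # absorption ends the history
                    return pos
                # isotropic scatter (stand-in for proper Compton sampling)
                cos_t = 2.0 * rng.random() - 1.0
                phi = 2.0 * np.pi * rng.random()
                sin_t = np.sqrt(1.0 - cos_t ** 2)
                direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            return pos

        endpoints = np.array([track_photon() for _ in range(1000)])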

  9. A Selective Review of Simulated Driving Studies: Combining Naturalistic and Hybrid Paradigms, Analysis Approaches, and Future Directions

    PubMed Central

    Calhoun, V. D.; Pearlson, G. D.

    2011-01-01

    Naturalistic paradigms such as movie watching or simulated driving that mimic closely real-world complex activities are becoming more widely used in functional magnetic resonance imaging (fMRI) studies both because of their ability to robustly stimulate brain connectivity and the availability of analysis methods which are able to capitalize on connectivity within and among intrinsic brain networks identified both during a task and in resting fMRI data. In this paper we review over a decade of work from our group and others on the use of simulated driving paradigms to study both the healthy brain as well as the effects of acute alcohol administration on functional connectivity during such paradigms. We briefly review our initial work focused on the configuration of the driving simulator and the analysis strategies. We then describe in more detail several recent studies from our group including a hybrid study examining distracted driving and compare resulting data with those from a separate visual oddball task. The analysis of these data was performed primarily using a combination of group independent component analysis (ICA) and the general linear model (GLM) and in the various studies we highlight novel findings which result from an analysis of either 1) within-network connectivity, 2) inter-network connectivity, also called functional network connectivity, or 3) the degree to which the modulation of the various intrinsic networks was associated with the alcohol administration and the task context. Despite the fact that the behavioral effects of alcohol intoxication are relatively well known, there is still much to discover on how acute alcohol exposure modulates brain function in a selective manner, associated with behavioral alterations. Through the above studies, we have learned more regarding the impact of acute alcohol intoxication on organization of the brain’s intrinsic connectivity networks during performance of a complex, real-world cognitive operation

  10. A selective review of simulated driving studies: Combining naturalistic and hybrid paradigms, analysis approaches, and future directions.

    PubMed

    Calhoun, V D; Pearlson, G D

    2012-01-01

    Naturalistic paradigms such as movie watching or simulated driving that mimic closely real-world complex activities are becoming more widely used in functional magnetic resonance imaging (fMRI) studies both because of their ability to robustly stimulate brain connectivity and the availability of analysis methods which are able to capitalize on connectivity within and among intrinsic brain networks identified both during a task and in resting fMRI data. In this paper we review over a decade of work from our group and others on the use of simulated driving paradigms to study both the healthy brain as well as the effects of acute alcohol administration on functional connectivity during such paradigms. We briefly review our initial work focused on the configuration of the driving simulator and the analysis strategies. We then describe in more detail several recent studies from our group including a hybrid study examining distracted driving and compare resulting data with those from a separate visual oddball task (Fig. 6). The analysis of these data was performed primarily using a combination of group independent component analysis (ICA) and the general linear model (GLM) and in the various studies we highlight novel findings which result from an analysis of either 1) within-network connectivity, 2) inter-network connectivity, also called functional network connectivity, or 3) the degree to which the modulation of the various intrinsic networks were associated with the alcohol administration and the task context. Despite the fact that the behavioral effects of alcohol intoxication are relatively well known, there is still much to discover on how acute alcohol exposure modulates brain function in a selective manner, associated with behavioral alterations. Through the above studies, we have learned more regarding the impact of acute alcohol intoxication on organization of the brain's intrinsic connectivity networks during performance of a complex, real-world cognitive

  11. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    SciTech Connect

    Crabtree, George; Glotzer, Sharon; McCurdy, Bill; Roberto, Jim

    2010-07-26

    enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness. The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together

  12. Final Report for "Community Petascale Project for Accelerator Science and Simulations".

    SciTech Connect

    Cary, J. R.; Bruhwiler, D. L.; Stoltz, P. H.; Cormier-Michel, E.; Cowan, B.; Schwartz, B. T.; Bell, G.; Paul, K.; Veitzer, S.

    2013-04-19

    This final report describes the work that has been accomplished over the past 5 years under the Community Petascale Project for Accelerator Science and Simulations (ComPASS) at Tech-X Corporation. Tech-X has been involved in the full range of ComPASS activities: simulation of laser plasma accelerator concepts, mainly in collaboration with the LOASIS program at LBNL; simulation of coherent electron cooling in collaboration with BNL; modeling of electron clouds in high-intensity accelerators, in collaboration with researchers at Fermilab; and accurate modeling of superconducting RF cavities in collaboration with Fermilab, JLab, and the Cockcroft Institute in the UK.

  13. Linear Accelerator and Gamma Knife-Based Stereotactic Cranial Radiosurgery: Challenges and Successes of Existing Quality Assurance Guidelines and Paradigms

    SciTech Connect

    Goetsch, Steven J.

    2008-05-01

    Intracranial stereotactic radiosurgery has been practiced since 1951. The technique has expanded from a single dedicated unit in Stockholm in 1968 to hundreds of centers performing an estimated 100,000 Gamma Knife and linear accelerator cases in 2005. The radiation dosimetry of small photon fields used in this technique has been well explored in the past 15 years. Quality assurance recommendations have been promulgated in refereed reports and by several national and international professional societies since 1991. The field has survived several reported treatment errors and incidents, generally reacting by strengthening standards and precautions. An increasing number of computer-controlled and robotic-dedicated treatment units are expanding the field and putting patients at risk of unforeseen errors. Revisions and updates to previously published quality assurance documents, and especially to radiation dosimetry protocols, are now needed to ensure continued successful procedures that minimize the risk of serious errors.

  14. Advanced visualization technology for terascale particle accelerator simulations

    SciTech Connect

    Ma, K-L; Schussman, G.; Wilson, B.; Ko, K.; Qiang, J.; Ryne, R.

    2002-11-16

    This paper presents two new hardware-assisted rendering techniques developed for interactive visualization of the terascale data generated from numerical modeling of next generation accelerator designs. The first technique, based on a hybrid rendering approach, makes possible interactive exploration of large-scale particle data from particle beam dynamics modeling. The second technique, based on a compact texture-enhanced representation, exploits the advanced features of commodity graphics cards to achieve perceptually effective visualization of the very dense and complex electromagnetic fields produced from the modeling of reflection and transmission properties of open structures in an accelerator design. Because of the collaborative nature of the overall accelerator modeling project, the visualization technology developed is for both desktop and remote visualization settings. We have tested the techniques using both time-varying particle data sets containing up to one billion particles per time step and electromagnetic field data sets with millions of mesh elements.

  15. Numerical simulations of the superdetonative ram accelerator combusting flow field

    NASA Technical Reports Server (NTRS)

    Soetrisno, Moeljo; Imlay, Scott T.; Roberts, Donald W.

    1993-01-01

    The effects of projectile canting and fins on the ram accelerator combusting flowfield and the possible cause of ram accelerator unstart are investigated by performing axisymmetric, two-dimensional, and three-dimensional calculations. Calculations are performed using the INCA code for solving the Navier-Stokes equations and the quasi-global combustion model of Westbrook and Dryer (1981, 1984), which includes N2 and nine reacting species (CH4, CO, CO2, H2, H, O2, O, OH, and H2O) that are allowed to undergo a 12-step reaction. It is found that, without canting, interactions between the fins, boundary layers, and combustion fronts are insufficient to unstart the projectile at superdetonative velocities. With canting, the projectile will unstart at flow conditions where it appears to accelerate without canting. Unstart occurs at some critical canting angle. It is also found that three-dimensionality plays an important role in the overall combustion process.

  16. Simulation of the flow field of a ram accelerator

    NASA Astrophysics Data System (ADS)

    Soetrisno, Moeljo; Imlay, Scott T.

    1991-06-01

    An effort is made to achieve a more complete numerical model than heretofore available for analysis and performance prediction regarding ram-accelerator projectiles, using the finite-rate chemistry code HANA. Results are presented from such analyses of a ram accelerator projectile operating in both the thermally-choked mode and the transdetonative mode. The flow field about the projectile, the complex oblique shock system, and the flow properties in the combusting region are detailed. The code uses a novel diagonal implicit solution algorithm which eliminates the expense of inverting the large block matrices arising in chemically reacting flows.

  17. Reactor for simulation and acceleration of solar ultraviolet damage

    NASA Technical Reports Server (NTRS)

    Laue, E.; Gupta, A.

    1979-01-01

    An environmental test chamber providing acceleration of UV radiation and precise temperature control (±1 °C) was designed, constructed, and tested. This chamber allows acceleration of solar ultraviolet up to 30 suns while maintaining the temperature of the absorbing surface at 30-60 °C. The test chamber utilizes a filtered medium-pressure mercury arc as the source of radiation, and a combination of a selenium radiometer and a silicon radiometer to monitor solar ultraviolet (295-340 nm) and total radiant power output, respectively. Details of design, construction, and operational procedures are presented along with typical test data.
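
    As a rough illustration of what a 30-sun acceleration factor buys in test time, the sketch below (Python, illustrative numbers only, not test data from the paper) converts chamber hours into equivalent outdoor sun-hours of UV exposure.

```python
# Illustrative bookkeeping only: convert hours in the accelerated-UV chamber
# into equivalent outdoor sun-hours at a given acceleration factor.
def equivalent_sun_hours(chamber_hours, acceleration_factor=30.0):
    return chamber_hours * acceleration_factor

weeks_in_chamber = 4
hours = weeks_in_chamber * 7 * 24
print(f"{hours} chamber hours ~ {equivalent_sun_hours(hours):.0f} equivalent sun-hours of UV")
```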

  18. Acceleration techniques for dependability simulation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
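
    Two of the classical building blocks mentioned above, random variate generation and statistics gathering across replications, can be sketched compactly; the Python below is a generic illustration (a Poisson failure process with a crude confidence interval), not code from the thesis.

```python
# Generic sketch: exponential inter-event times by inverse-transform sampling,
# replicated runs, and a crude 95% confidence interval on the mean.
import math
import random
import statistics

def exp_variate(rate):
    """Exponential inter-event time via inverse transform: t = -ln(1 - U) / rate."""
    return -math.log(1.0 - random.random()) / rate

def simulate_failures(rate, horizon):
    """Count failure events on [0, horizon) for a Poisson failure process."""
    t, count = 0.0, 0
    while True:
        t += exp_variate(rate)
        if t >= horizon:
            return count
        count += 1

reps = [simulate_failures(rate=0.01, horizon=1000.0) for _ in range(200)]
mean = statistics.mean(reps)
half_width = 1.96 * statistics.stdev(reps) / len(reps) ** 0.5
print(f"mean failures per run = {mean:.2f} +/- {half_width:.2f}")
```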

  19. The changing paradigm for integrated simulation in support of Command and Control (C2)

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Hieb, Michael

    2016-05-01

    Modern software and network technologies are on the verge of enabling what has eluded the simulation and operational communities for more than two decades, truly integrating simulation functionality into operational Command and Control (C2) capabilities. This deep integration will benefit multiple stakeholder communities from experimentation and test to training by providing predictive and advanced analytics. There is a new opportunity to support operations with simulation once a deep integration is achieved. While it is true that doctrinal and acquisition issues remain to be addressed, nonetheless it is increasingly obvious that few technical barriers persist. How will this change the way in which common simulation and operational data is stored and accessed? As the Services move towards single networks, will there be technical and policy issues associated with sharing those operational networks with simulation data, even if the simulation data is operational in nature (e.g., associated with planning)? How will data models that have traditionally been simulation only be merged in with operational data models? How will the issues of trust be addressed?

  20. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628

  1. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  2. Particle-in-Cell Simulations of Ponderomotive Particle Acceleration in a Plasma

    SciTech Connect

    Startsev, E.A.; McKinstrie, C.J.

    2003-06-17

    In previous publications the ponderomotive acceleration of electrons by an idealized (one-dimensional) circularly polarized laser pulse in a plasma was studied analytically. Acceleration gradients of order 100 GeV/m were predicted. To verify the predictions of the theoretical model, a two-dimensional relativistic particle-in-cell code was developed. Simulations of the interaction of a preaccelerated electron bunch with a realistic (two-dimensional) laser pulse in a plasma are presented and analyzed. The simulation results validate the theoretical model and show that significant ponderomotive acceleration is possible.

  3. Plasma Wakefield Acceleration Simulations with Multiple Electron Bunches

    NASA Astrophysics Data System (ADS)

    Kallos, Efthymios; Muggli, Patric; Yakimenko, Vitaly; Kusche, Karl; Park, Jangho; Babzien, Marcus; Lichtl, Adam

    2008-11-01

    In the multibunch plasma wakefield accelerator, a train of electron bunches is utilized to excite a high gradient wakefield in a plasma which can be sampled by a trailing short witness bunch. We show that for five drive bunches with 150 pC total charge which can be generated in the Accelerator Test Facility of the Brookhaven National Lab, a wakefield of 140 MV/m can be generated if the plasma density is matched to the bunch train period. In addition, the possibility of ramping the charge per bunch in order to achieve high transformer ratios (>5) is examined, a scenario that is of great interest for a future afterburner collider. The work was supported by the US Department of Energy.

  4. Particle Acceleration in the Low Corona Over Broad Longitudes: Coupling MHD and 3D Particle Simulations

    NASA Astrophysics Data System (ADS)

    Gorby, M.; Schwadron, N.; Torok, T.; Downs, C.; Lionello, R.; Linker, J.; Titov, V. S.; Mikic, Z.; Riley, P.; Desai, M. I.; Dayeh, M. A.

    2014-12-01

    Recent work on the coupling between the Energetic Particle Radiation Environment Module (EPREM, a 3D energetic particle model) and Magnetohydrodynamics Around a Sphere (MAS, an MHD code developed at Predictive Science, Inc.) has demonstrated the efficacy of compression regions around fast coronal mass ejections (CMEs) for particle acceleration low in the corona (~3-6 solar radii). These couplings show rapid particle acceleration over a broad longitudinal extent (~80 degrees) resulting from the pile-up of magnetic flux in the compression regions and their subsequent expansion. The challenge for forming large SEP events in such compression-acceleration scenarios is to have enhanced scattering within the acceleration region while also allowing for efficient escape of accelerated particles downstream (away from the Sun) from the compression region. We present here the most recent simulation results including energetic particle and CME plasma profiles, the subsequent flux and dosages at 1 AU, and an analysis of the compressional regions as efficient accelerators.

  5. Simulation of Laser Wake Field Acceleration using a 2.5D PIC Code

    NASA Astrophysics Data System (ADS)

    An, W. M.; Hua, J. F.; Huang, W. H.; Tang, Ch. X.; Lin, Y. Z.

    2006-11-01

    A 2.5D PIC simulation code has been developed to study LWFA (Laser Wakefield Acceleration). Electron self-injection and the generation of a mono-energetic electron beam in LWFA are briefly discussed through the simulation. This year's experiment at the SILEX-I laser facility is also introduced.

  6. 3-D RPIC simulations of relativistic jets: Particle acceleration, magnetic field generation, and emission

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.

    2006-01-01

    Nonthermal radiation observed from astrophysical systems containing (relativistic) jets and shocks, e.g., supernova remnants, active galactic nuclei (AGNs), gamma-ray bursts (GRBs), and Galactic microquasar systems, usually has power-law emission spectra. Fermi acceleration is the mechanism usually assumed for the acceleration of particles in astrophysical environments. Recent PIC simulations using injected relativistic electron-ion (or electron-positron) jets show that acceleration occurs within the downstream jet, rather than by the scattering of particles back and forth across the shock as in Fermi acceleration. Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, other two-stream instabilities, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. The simulation results show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields. These magnetic fields contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants. We will review recent PIC simulations which show particle acceleration in jets.

  7. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    SciTech Connect

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
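
    The heart of the accelerated step is a large matrix product between daylight coefficients and per-hour sky vectors. The NumPy sketch below shows the structure of that product with illustrative array names and sizes (the 146-element sky case mentioned above); the paper's implementation offloads the same multiplication to a GPU through OpenCL.

```python
# Structure of the accelerated operation: a dense product between a daylight
# coefficient matrix and hourly sky vectors. Array names and sizes are
# illustrative, not the Radiance data layout; the paper runs this product on
# a GPU via OpenCL instead of NumPy on the CPU.
import numpy as np

n_sensors, n_patches, n_hours = 1000, 146, 8760    # 146-element sky vector case
dc_matrix = np.random.rand(n_sensors, n_patches)   # daylight coefficients (sensor x sky patch)
sky = np.random.rand(n_patches, n_hours)           # one sky radiance vector per hour

illuminance = dc_matrix @ sky                      # annual result, shape (n_sensors, n_hours)
print(illuminance.shape)
```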

  8. ELECTROMAGNETIC AND THERMAL SIMULATIONS FOR THE SWITCH REGION OF A COMPACT PROTON ACCELERATOR

    SciTech Connect

    Wang, L; Caporaso, G J; Sullivan, J S

    2007-06-15

    A compact proton accelerator for medical applications is being developed at Lawrence Livermore National Laboratory. The accelerator architecture is based on the dielectric wall accelerator (DWA) concept. One critical area to consider is the switch region. Electric field simulations and thermal calculations of the switch area were performed to help determine the operating limits of the SiC switches. Different geometries were considered for the field simulation, including the shape of the thin indium solder meniscus between the electrodes and the SiC. Electric field simulations were also utilized to demonstrate how the field stress could be reduced. Both transient and steady-state thermal simulations were analyzed to find the average power capability of the switches.

  9. Beam dynamics and wakefield simulations of the double grating accelerating structure

    SciTech Connect

    Najafabadi, B. Montazeri; Byer, R. L.; Ng, C. K.; England, R. J.; Peralta, E. A.; Soong, K.; Noble, R.; Wu, Z.

    2012-12-21

    Laser-driven acceleration in dielectric structures can provide gradients on the order of GeV/m. The small transverse dimension and tiny feature sizes introduce challenges in design, fabrication, and simulation studies of these structures. In this paper we present the results of beam dynamics simulations and short-range longitudinal wakefield simulations of the double grating structure. We show the linear trend of acceleration in a dielectric accelerator design and calculate the maximum achievable gradient to be 0.47E{sub 0}, where E{sub 0} is the maximum electric field of the laser excitation. On the other hand, using wakefield simulations, we show that the loss factor of the structure with a 400 nm gap size will be 0.12 GV/m for a 10 fC, 100 as electron bunch, which is an order of magnitude less than the expected gradient near the damage threshold of the device.

  10. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE PAGES Beta

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.
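
    A minimal way to see the difference is to write a physical relation once as a residual and let a solver handle whichever causality is needed, which is what equation-based languages automate; the Python sketch below uses a hypothetical pressure-drop relation, not a model from the paper.

```python
# Hypothetical pressure-drop relation dp = k * m**2, written once as a residual.
# An equation-based tool solves the declared relation for whichever variable is
# unknown; imperative code must hard-wire one assignment per causality.
from scipy.optimize import brentq

k = 2.0e5                       # illustrative loss coefficient

def residual(m_flow, dp):
    return dp - k * m_flow**2   # the declared relation

# Causality 1: pressure drop known, solve the relation for mass flow.
m_flow = brentq(lambda m: residual(m, dp=1.0e4), 0.0, 10.0)
# Causality 2: mass flow known, evaluate the same relation directly.
dp = k * 0.1**2
print(m_flow, dp)
```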

  11. Simulation of the Focal Spot of the Accelerator Bremsstrahlung Radiation

    NASA Astrophysics Data System (ADS)

    Sorokin, V.; Bespalov, V.

    2016-06-01

    Testing of thick-walled objects by bremsstrahlung radiation (BR) is primarily performed via high-energy quanta. The testing parameters are specified by the focal spot size of the high-energy bremsstrahlung radiation. In determining the focal spot size, the high-energy BR portion cannot be experimentally separated from the low-energy BR to use high-energy quanta only. The patterns of BR focal spot formation have been investigated via statistical modeling of the radiation transfer in the target material. The distributions of BR quanta emitted by the target for different energies and emission angles under normal distribution of the accelerated electrons bombarding the target have been obtained, and the ratio of the distribution parameters has been determined.

  12. Simulation Studies of the Dielectric Grating as an Accelerating and Focusing Structure

    SciTech Connect

    Soong, Ken; Peralta, E.A.; Byer, R.L.; Colby, E.; /SLAC

    2011-08-12

    A grating-based design is a promising candidate for a laser-driven dielectric accelerator. Through simulations, we show the merits of a readily fabricated grating structure as an accelerating component. Additionally, we show that with a small design perturbation, the accelerating component can be converted into a focusing structure. The understanding of these two components is critical in the successful development of any complete accelerator. The concept of accelerating electrons with the tremendous electric fields found in lasers has been proposed for decades. However, until recently the realization of such an accelerator was not technologically feasible. Recent advances in the semiconductor industry, as well as advances in laser technology, have now made laser-driven dielectric accelerators imminent. The grating-based accelerator is one proposed design for a dielectric laser-driven accelerator. This design, which was introduced by Plettner, consists of a pair of opposing transparent binary gratings. The teeth of the gratings serve as a phase mask, ensuring phase synchronicity between the electromagnetic field and the moving particles. The current grating accelerator design has the drive laser incident perpendicular to the substrate, which poses a laser-structure alignment complication. The next iteration of grating structure fabrication seeks to monolithically create an array of grating structures by etching the grating's vacuum channel into a fused silica wafer. With this method it is possible to have the drive laser confined to the plane of the wafer, thus ensuring alignment of the laser and structure, the two grating halves, and subsequent accelerator components. There has been previous work using 2-dimensional finite difference time domain (2D-FDTD) calculations to evaluate the performance of the grating accelerator structure. However, this work approximates the grating as an infinite structure and does not accurately model a

  13. Modeling laser wakefield accelerator experiments with ultrafast particle-in-cell simulations in boosted frames

    SciTech Connect

    Martins, S. F.; Fonseca, R. A.; Vieira, J.; Silva, L. O.

    2010-05-15

    The development of new laser systems at the 10 Petawatt range will push laser wakefield accelerators to novel regimes, for which theoretical scalings predict the possibility to accelerate electron bunches up to tens of GeVs in meter-scale plasmas. Numerical simulations will play a crucial role in testing, probing, and optimizing the physical parameters and the setup of future experiments. Fully kinetic simulations are computationally very demanding, pushing the limits of today's supercomputers. In this paper, the recent developments in the OSIRIS framework [R. A. Fonseca et al., Lect. Notes Comput. Sci. 2331, 342 (2002)] are described, in particular the boosted frame scheme, which leads to a dramatic change in the computational resources required to model laser wakefield accelerators. Results from one-to-one modeling of the next generation of laser systems are discussed, including the confirmation of electron bunch acceleration to the energy frontier.

  14. Accelerated modeling and simulation with a desktop supercomputer

    NASA Astrophysics Data System (ADS)

    Kelmelis, Eric J.; Humphrey, John R.; Durbano, James P.; Ortiz, Fernando E.

    2006-05-01

    The performance of modeling and simulation tools is inherently tied to the platform on which they are implemented. In most cases, this platform is a microprocessor, either in a desktop PC, PC cluster, or supercomputer. Microprocessors are used because of their familiarity to developers, not necessarily their applicability to the problems of interest. We have developed the underlying techniques and technologies to produce supercomputer performance from a standard desktop workstation for modeling and simulation applications. This is accomplished through the combined use of graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and standard microprocessors. Each of these platforms has unique strengths and weaknesses but, when used in concert, can rival the computational power of a high-performance computer (HPC). By adding a powerful GPU and our custom designed FPGA card to a commodity desktop PC, we have created simulation tools capable of replacing massive computer clusters with a single workstation. We present this work in its initial embodiment: simulators for electromagnetic wave propagation and interaction. We discuss the trade-offs of each independent technology, GPUs, FPGAs, and microprocessors, and how we efficiently partition algorithms to take advantage of the strengths of each while masking their weaknesses. We conclude by discussing enhancing the computational performance of the underlying desktop supercomputer and extending it to other application areas.

  15. Accelerating large cardiac bidomain simulations by arnoldi preconditioning.

    PubMed

    Deo, Makarand; Bauer, Steffen; Plank, Gernot; Vigmond, Edward

    2006-01-01

    Bidomain simulations of cardiac systems often involve solving large, sparse, linear systems of the form Ax=b. These simulations are computationally very expensive in terms of run time and memory requirements. Therefore, efficient solvers are essential to keep simulations tractable. In this paper, an efficient preconditioner for the conjugate gradient (CG) method based on system order reduction using the Arnoldi method (A-PCG) is explained. Large-order systems generated during cardiac bidomain simulations using a finite element method formulation are solved using the A-PCG method. Its performance is compared with incomplete LU (ILU) preconditioning. Results indicate that the A-PCG estimates an approximate solution considerably faster than the ILU, often within a single iteration. To reduce the computational demands in terms of memory and run time, the use of a cascaded preconditioner is suggested. The A-PCG can be applied to quickly obtain an approximate solution; subsequently a cheap iterative method such as successive overrelaxation (SOR) is applied to further refine the solution to arrive at a desired accuracy. The memory requirements are less than for direct LU but more than for the ILU method. The proposed scheme is shown to yield significant speedups when solving time-evolving systems. PMID:17946209
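
    For orientation, the sketch below sets up preconditioned CG on a sparse SPD system using SciPy with an incomplete-LU preconditioner, i.e., the ILU baseline the A-PCG method is compared against; the Arnoldi-based preconditioner itself is not reproduced here, and the matrix is a simple stand-in rather than a bidomain FEM system.

```python
# Preconditioned CG on a sparse SPD stand-in system (1-D Laplacian), using an
# incomplete-LU preconditioner: the ILU baseline that A-PCG is compared against.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)  # action of the preconditioner on a residual

x, info = spla.cg(A, b, M=M)
print("converged" if info == 0 else f"cg returned {info}")
```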

  16. Beam dynamics simulations of post low energy beam transport section in RAON heavy ion accelerator

    NASA Astrophysics Data System (ADS)

    Jin, Hyunchang; Jang, Ji-Ho; Jang, Hyojae; Hong, In-Seok

    2016-02-01

    RAON (Rare isotope Accelerator Of Newness), the heavy ion accelerator of the rare isotope science project in Daejeon, Korea, has been designed to accelerate multiple-charge-state beams to be used for various science programs. In the RAON accelerator, the rare isotope beams which are generated by an isotope separation on-line system with a wide range of nuclei and charges will be transported through the post Low Energy Beam Transport (LEBT) section to the Radio Frequency Quadrupole (RFQ). In order to transport many kinds of rare isotope beams stably to the RFQ, the post LEBT should be designed to satisfy the RFQ input requirements at its end while keeping the Twiss parameters small. We present the recent lattice design of the post LEBT in the RAON accelerator and the results of the beam dynamics simulations based on it. In addition, the error analysis and correction in the post LEBT are also described.
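
    As a small example of the Twiss-parameter bookkeeping such beam-dynamics simulations perform between elements, the Python sketch below propagates (beta, alpha, gamma) through a field-free drift; it is illustrative only and not the RAON lattice code.

```python
# Propagate Courant-Snyder (Twiss) parameters through a field-free drift.
def drift_twiss(beta0, alpha0, length):
    """Return (beta, alpha, gamma) after a drift of the given length in meters."""
    gamma0 = (1.0 + alpha0**2) / beta0            # Courant-Snyder relation
    beta = beta0 - 2.0 * alpha0 * length + gamma0 * length**2
    alpha = alpha0 - gamma0 * length
    return beta, alpha, gamma0                    # gamma is unchanged in a drift

print(drift_twiss(beta0=2.0, alpha0=-1.0, length=0.5))
```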

  17. Three-dimensional simulation analysis of the standing-wave free-electron laser two-beam accelerator

    SciTech Connect

    Wang, C.; Sessler, A.

    1993-01-01

    We have modified a two-dimensional relativistic klystron code, developed by Ryne and Yu, to simulate both the standing-wave free-electron laser two-beam accelerator and the relativistic klystron two-beam accelerator. In this paper, the code is used to study a standing-wave free-electron laser with three cavities. The effect of the electron beam radius on the RF output power, namely a three-dimensional effect, is examined.

  18. 3D Simulations for a Micron-Scale, Dielectric-Based Acceleration Experiment

    SciTech Connect

    Yoder, R. B.; Travish, G.; Xu Jin; Rosenzweig, J. B.

    2009-01-22

    An experimental program to demonstrate a dielectric, slab-symmetric accelerator structure has been underway for the past two years. These resonant devices are driven by a side-coupled 800-nm laser and can be configured to maintain the field profile necessary for synchronous acceleration and focusing of relativistic or nonrelativistic particles. We present 3D simulations of various versions of the structure geometry, including a metal-walled structure relevant to ongoing cold tests on resonant properties, and an all-dielectric structure to be constructed for a proof-of-principle acceleration experiment.

  19. Microparticle accelerator of unique design. [for micrometeoroid impact and cratering simulation

    NASA Technical Reports Server (NTRS)

    Vedder, J. F.

    1978-01-01

    A microparticle accelerator has been devised for micrometeoroid impact and cratering simulation; the device produces high-velocity (0.5-15 km/sec), micrometer-sized projectiles of any cohesive material. In the source, an electrodynamic levitator, single particles are charged by ion bombardment in high vacuum. The vertical accelerator has four drift tubes, each initially at a high negative voltage. After injection of the projectile, each tube is grounded in turn at a time determined by the voltage and charge/mass ratio to give four acceleration stages with a total voltage equivalent to about 1.7 MV.
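
    The energy bookkeeping in the abstract reduces to v = sqrt(2 (q/m) V) for a non-relativistic particle falling through the total staged voltage; the sketch below uses a hypothetical charge-to-mass ratio for a micron-sized, ion-charged grain to show that the quoted ~1.7 MV indeed lands in the 0.5-15 km/sec range.

```python
# Non-relativistic energy balance: q*V_total = (1/2) m v**2, so v = sqrt(2 (q/m) V).
# The charge-to-mass ratio below is a hypothetical value for a micron-sized grain.
import math

def final_speed(q_over_m, total_voltage):
    """Speed in m/s after electrostatic acceleration through total_voltage volts."""
    return math.sqrt(2.0 * q_over_m * total_voltage)

v = final_speed(q_over_m=30.0, total_voltage=1.7e6)   # ~30 C/kg through ~1.7 MV
print(f"{v / 1e3:.1f} km/s")                          # of order 10 km/s
```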

  20. Mainstreaming Modeling and Simulation to Accelerate Public Health Innovation

    PubMed Central

    Sepulveda, Martin-J.; Mabry, Patricia L.

    2014-01-01

    Dynamic modeling and simulation are systems science tools that examine behaviors and outcomes resulting from interactions among multiple system components over time. Although there are excellent examples of their application, they have not been adopted as mainstream tools in population health planning and policymaking. Impediments to their use include the legacy and ease of use of statistical approaches that produce estimates with confidence intervals, the difficulty of multidisciplinary collaboration for modeling and simulation, systems scientists’ inability to communicate effectively the added value of the tools, and low funding for population health systems science. Proposed remedies include aggregation of diverse data sets, systems science training for public health and other health professionals, changing research incentives toward collaboration, and increased funding for population health systems science projects. PMID:24832426

  1. Accelerating virtual surgery simulation for congenital aural atresia

    NASA Astrophysics Data System (ADS)

    Li, Bin; Wang, Zigang; Smouha, Eric; Chen, Dongqing; Liang, Zhengrong

    2004-05-01

    In this paper, we propose a new, efficient implementation for the simulation of surgery planning for congenital aural atresia. We first applied a two-level image segmentation scheme to classify the inner ear structures. Based on it, several 3D texture volumes were generated and sent to the graphics pipeline on a PC platform. By exploiting the texture-mapping capability of the PC graphics/video board, a high-quality 3D image was created showing the accurate spatial relationships of the complex surgical anatomy of congenitally atretic ears. Furthermore, we exploited the graphics hardware-supported per-fragment functions to perform geometric clipping on the 3D volume data to interactively simulate the procedure of the surgical operation. The result was very encouraging.

  2. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  3. Investigation on Accelerating Dust Storm Simulation via Domain Decomposition Methods

    NASA Astrophysics Data System (ADS)

    Yu, M.; Gui, Z.; Yang, C. P.; Xia, J.; Chen, S.

    2014-12-01

    Dust storm simulation is a data- and computing-intensive process, which requires high efficiency and adequate computing resources. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. However, it is still a question worthy of consideration how to allocate these subdomain processes to computing nodes without introducing imbalanced task loads and unnecessary communications among computing nodes. Here we propose a domain decomposition and allocation framework that can carefully leverage the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. The framework is tested in the NMM (Nonhydrostatic Mesoscale Model)-dust model, where a 72-hour dust load process is simulated. Performance results using the proposed scheduling method are compared with those using the default MPI scheduling method. Results demonstrate that the system improves the performance of the simulation by 20% to 80%.
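
    One simple baseline for the allocation problem described above is a longest-processing-time-first assignment of subdomain workloads to nodes, which balances per-node load before communication cost is considered; the Python sketch below is a generic illustration, not the paper's framework.

```python
# Longest-processing-time-first assignment of subdomain workloads to nodes.
import heapq

def assign_subdomains(workloads, n_nodes):
    """Map each subdomain index to a node, placing the largest workloads first."""
    heap = [(0.0, node) for node in range(n_nodes)]   # (current load, node id)
    heapq.heapify(heap)
    assignment = {}
    for idx in sorted(range(len(workloads)), key=lambda i: -workloads[i]):
        load, node = heapq.heappop(heap)
        assignment[idx] = node
        heapq.heappush(heap, (load + workloads[idx], node))
    return assignment

print(assign_subdomains([5.0, 3.0, 8.0, 2.0, 7.0, 4.0], n_nodes=3))
```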

  4. Modern Simulation and Optimization Tools for Non-Scaling FFAGs and Related Accelerators

    SciTech Connect

    C. Johnstone; M. Berz; K. Makino

    2010-11-04

    With the U.S. experimental effort in HEP largely located at laboratories supporting the operations of large, highly specialized accelerators, the understanding and prediction of high energy particle accelerators becomes critical to the overall success of the DOE HEP program. One area in which small businesses can contribute to the ongoing success of the U.S. program in HEP is through innovations in computer techniques and sophistication in the modeling of high-energy accelerators. A specific newly identified problem lies in the simulation and optimization of FFAGs and related devices, for which currently available tools originally developed for other purposes provide only approximate and inefficient simulation. We propose to develop a set of tools for this purpose based on modern techniques and simulation approaches.

  5. Acceleration of heavy and light particles in turbulence: Comparison between experiments and direct numerical simulations

    NASA Astrophysics Data System (ADS)

    Volk, R.; Calzavarini, E.; Verhille, G.; Lohse, D.; Mordant, N.; Pinton, J.-F.; Toschi, F.

    2008-08-01

    We compare experimental data and numerical simulations for the dynamics of inertial particles with finite density in turbulence. In the experiment, bubbles and solid particles are optically tracked in a turbulent flow of water using an Extended Laser Doppler Velocimetry technique. The probability density functions (PDF) of particle accelerations and their auto-correlation in time are computed. Numerical results are obtained from a direct numerical simulation in which a suspension of passive pointwise particles is tracked, with the same finite density and the same response time as in the experiment. We observe a good agreement for both the variance of acceleration and the autocorrelation time scale of the dynamics; small discrepancies on the shape of the acceleration PDF are observed. We discuss the effects induced by the finite size of the particles, not taken into account in the present numerical simulations.
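
    The two diagnostics compared in the study, the acceleration PDF and its temporal autocorrelation, can be computed from any tracked trajectory; the Python sketch below does so for a synthetic 1-D signal (real input would be the experimentally or numerically tracked particle positions).

```python
# Acceleration PDF and autocorrelation from a tracked trajectory (synthetic here).
import numpy as np

dt = 1e-3
x = np.cumsum(np.random.randn(20000)) * dt            # synthetic 1-D trajectory
a = np.gradient(np.gradient(x, dt), dt)                # acceleration by finite differences

pdf, edges = np.histogram(a / a.std(), bins=100, density=True)  # normalized PDF

a0 = a - a.mean()                                      # autocorrelation of the acceleration
acf = np.correlate(a0, a0, mode="full")[a0.size - 1:]
acf /= acf[0]
print(pdf.max(), acf[:5])
```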

  6. Particle in cell simulation of laser-accelerated proton beams for radiation therapy.

    PubMed

    Fourkal, E; Shahine, B; Ding, M; Li, J S; Tajima, T; Ma, C M

    2002-12-01

    In this article we present the results of particle in cell (PIC) simulations of laser plasma interaction for proton acceleration for radiation therapy treatments. We show that under optimal interaction conditions protons can be accelerated up to relativistic energies of 300 MeV by a petawatt laser field. The proton acceleration is due to the dragging Coulomb force arising from charge separation induced by the ponderomotive pressure (light pressure) of high-intensity laser. The proton energy and phase space distribution functions obtained from the PIC simulations are used in the calculations of dose distributions using the GEANT Monte Carlo simulation code. Because of the broad energy and angular spectra of the protons, a compact particle selection and beam collimation system will be needed to generate small beams of polyenergetic protons for intensity modulated proton therapy. PMID:12512712

  7. Modern approaches to accelerator simulation and on-line control

    SciTech Connect

    Lee, M.; Clearwater, S.; Theil, E.; Paxson, V.

    1987-02-01

    COMFORT-PLUS consists of three parts: (1) COMFORT (Control Of Machine Function, ORbits, and Trajectories), which computes the machine lattice functions and transport matrices along a beamline; (2) PLUS (Prediction from Lattice Using Simulation) which finds or compensates for errors in the beam parameters or machine elements; and (3) a highly graphical interface to PLUS. The COMFORT-PLUS package has been developed on a SUN-3 workstation. The structure and use of COMFORT-PLUS are described, and an example of the use of the package is presented.

  8. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    SciTech Connect

    Paul, Kevin; Huang, C.; Bruhwiler, D.L.; Mori, W.B.; Tsung, F.S.; Cormier-Michel, E.; Geddes, C.G.R.; Cowan, B.; Cary, J.R.; Esarey, E.; Fonseca, R.A.; Martins, S.F.; Silva, L.O.

    2008-09-08

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating gradients from full PIC agree for all values of a0 and that full and reduced PIC agree well for values of a0 approaching 4.
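
    For readers unfamiliar with the a0 parameter quoted above, a standard back-of-the-envelope estimate (not taken from the benchmarking paper) relates it to laser intensity and wavelength; the sketch below evaluates it for Ti:Sapphire-like parameters.

```python
# Rough estimate of the normalized laser strength a0 for linear polarization:
#   a0 ~ 0.855 * lambda[um] * sqrt(I / 1e18 W/cm^2)
import math

def a0_estimate(intensity_w_cm2, wavelength_um=0.8):   # 0.8 um, Ti:Sapphire-like
    return 0.855 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)

for intensity in (1e18, 2e19):
    print(f"I = {intensity:.1e} W/cm^2 -> a0 ~ {a0_estimate(intensity):.2f}")
```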

  9. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    SciTech Connect

    Paul, K.; Bruhwiler, D. L.; Cowan, B.; Cary, J. R.; Huang, C.; Mori, W. B.; Tsung, F. S.; Cormier-Michel, E.; Geddes, C. G. R.; Esarey, E.; Fonseca, R. A.; Martins, S. F.; Silva, L. O.

    2009-01-22

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating gradients from full PIC agree for all values of a{sub 0} and that full and reduced PIC agree well for values of a{sub 0} approaching 4.

  10. A comparison of acceleration control and pulse control in simulated spacecraft docking maneuvers

    NASA Technical Reports Server (NTRS)

    Brody, Adam R.; Ellis, Stephen R.

    1991-01-01

    Results are reported from a study designed to compare acceleration control with pulse control in simulated spacecraft docking maneuvers. Nine commercial airline pilots served as test subjects and the simulated remote dockings of an orbital maneuvering vehicle (OMV) to a space station were initiated from 50, 100, and 150 meters along the station's minus velocity vector. The trials were grouped into blocks of 18 consisting of six repetitions of the three ranges. It was found that mission duration was lower with pulse mode, while fuel consumption was lower with acceleration mode. It is suggested that this result is most likely specific to the thruster values that are being used.

  11. MO-F-16A-02: Simulation of a Medical Linear Accelerator for Teaching Purposes

    SciTech Connect

    Carlone, M; Lamey, M; Anderson, R; MacPherson, M

    2014-06-15

    Purpose: Detailed functioning of linear accelerator physics is well known. Less well developed is the basic understanding of how the adjustment of the linear accelerator's electrical components affects the resulting radiation beam. Other than the text by Karzmark, there is very little literature devoted to the practical understanding of linear accelerator functionality targeted at the radiotherapy clinic level. The purpose of this work is to describe a simulation environment for medical linear accelerators with the purpose of teaching linear accelerator physics. Methods: Varian-type linacs were simulated. Klystron saturation and peak output were modelled analytically. The energy gain of an electron beam was modelled using load line expressions. The bending magnet was assumed to be a perfect solenoid whose pass-through energy varied linearly with solenoid current. The dose rate calculated at depth in water was assumed to be a simple function of the target's beam current. The flattening filter was modelled as an attenuator with conical shape, and the time-averaged dose rate at a depth in water was determined by calculating kerma. Results: Fifteen analytical models were combined into a single model called SIMAC. Performance was verified systematically by adjusting typical linac control parameters. Increasing the klystron pulse voltage increased the dose rate to a peak, which then decreased as the beam energy was further increased, owing to the fixed pass-through energy of the bending magnet. Increasing the accelerator beam current leads to a higher dose per pulse. However, the energy of the electron beam decreases due to beam loading, so the dose rate eventually reaches a maximum and then decreases as the beam current is further increased. Conclusion: SIMAC can realistically simulate the functionality of a linear accelerator. It is expected to have value as a teaching tool for both medical physicists and linear accelerator service personnel.
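
    The beam-loading behaviour described in the Results can be captured with a toy analytic model: dose per pulse grows with beam current while the load line reduces the electron energy, so the delivered dose rate peaks and then falls. The Python sketch below uses hypothetical constants and is not SIMAC itself.

```python
# Toy beam-loading model with hypothetical constants: dose per pulse rises with
# current, the load line lowers the energy, and the dose rate peaks then falls.
import numpy as np

E0, loading = 7.0, 0.02            # no-load energy [MeV], energy drop per mA [MeV/mA]

def dose_rate(current_ma):
    energy = np.clip(E0 - loading * current_ma, 0.0, None)   # load-line energy gain
    yield_factor = energy ** 2.7   # crude energy-dependent bremsstrahlung yield
    return current_ma * yield_factor

for i in np.linspace(0.0, 300.0, 7):
    print(f"{i:6.1f} mA -> relative dose rate {dose_rate(i):10.1f}")
```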

  12. Molecular dynamics simulations of GPCR-cholesterol interaction: An emerging paradigm.

    PubMed

    Sengupta, Durba; Chattopadhyay, Amitabha

    2015-09-01

    G protein-coupled receptors (GPCRs) are the largest class of molecules involved in signal transduction across cell membranes and represent major targets in the development of novel drug candidates. Membrane cholesterol plays an important role in GPCR structure and function. Molecular dynamics simulations have been successful in exploring the effect of cholesterol on the receptor and a general consensus molecular view is emerging. We review here recent molecular dynamics studies at multiple resolutions highlighting the main features of cholesterol-GPCR interaction. Several cholesterol interaction sites have been identified on the receptor that are reminiscent of nonannular sites. These cholesterol hot-spots are highly dynamic and have a microsecond time scale of exchange with the bulk lipids. A few consensus sites (such as the CRAC site) have been identified that correspond to higher cholesterol interaction. Interestingly, high plasticity is observed in the modes of cholesterol interaction and several sites have been suggested to have high cholesterol occupancy. We therefore believe that these cholesterol hot-spots are indicative of 'high occupancy sites' rather than 'binding sites'. The results suggest that the energy landscape of cholesterol association with GPCRs corresponds to a series of shallow minima interconnected by low barriers. These specific interactions, along with general membrane effects, have been observed to modulate GPCR organization. Membrane cholesterol effects on receptor structure and organization, that in turn influences receptor cross-talk and drug efficacy, represent a new frontier in GPCR research. This article is part of a Special Issue entitled: Lipid-protein interactions. Guest Editors: Amitabha Chattopadhyay and Jean-Marie Ruysschaert. PMID:25817549

  13. The Study of Non-Linear Acceleration of Particles during Substorms Using Multi-Scale Simulations

    SciTech Connect

    Ashour-Abdalla, Maha

    2011-01-04

    To understand particle acceleration during magnetospheric substorms we must consider the problem on multiple scales, ranging from the large-scale changes in the entire magnetosphere to the microphysics of wave-particle interactions. In this paper we present two examples that demonstrate the complexity of substorm particle acceleration and its multi-scale nature. The first substorm provided us with an excellent example of ion acceleration. On March 1, 2008, four THEMIS spacecraft were in a line extending from 8 R{sub E} to 23 R{sub E} in the magnetotail during a very large substorm during which ions were accelerated to >500 keV. We used a combination of global magnetohydrodynamic and large-scale kinetic simulations to model the ion acceleration and found that the ions gained energy through non-adiabatic trajectories across the substorm electric field in a narrow region extending across the magnetotail between x = -10 R{sub E} and x = -15 R{sub E}. In this strip, called the 'wall region', the ions move rapidly in azimuth and gain 100s of keV. In the second example we studied the acceleration of electrons associated with a pair of dipolarization fronts during a substorm on February 15, 2008. During this substorm three THEMIS spacecraft were grouped in the near-Earth magnetotail (x ~ -10 R{sub E}) and observed electron acceleration of >100 keV accompanied by intense plasma waves. We used the MHD simulations and analytic theory to show that adiabatic motion (betatron and Fermi acceleration) was insufficient to account for the electron acceleration and that kinetic processes associated with the plasma waves were important.

  14. A graph model, ParaDiGM, and a software tool, VISA, for the representation, design, and simulation of parallel, distributed computations

    SciTech Connect

    Demeure, I.M.

    1989-01-01

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model) model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.
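
    A DCPG-style view is essentially a task-precedence graph with message-passing edges. The Python sketch below builds such a graph for a hypothetical computation and derives one legal execution order; it illustrates the general data structure, not the ParaDiGM notation itself.

```python
# A task-precedence graph with message edges, and one legal execution order.
from collections import defaultdict, deque

edges = [("read_input", "partition"), ("partition", "solve_A"),
         ("partition", "solve_B"), ("solve_A", "merge"), ("solve_B", "merge")]

succ, indeg = defaultdict(list), defaultdict(int)
for u, v in edges:
    succ[u].append(v)
    indeg[v] += 1

ready = deque(task for task in succ if indeg[task] == 0)   # tasks with no predecessors
order = []
while ready:
    task = ready.popleft()
    order.append(task)
    for nxt in succ.get(task, []):
        indeg[nxt] -= 1
        if indeg[nxt] == 0:
            ready.append(nxt)
print(order)   # a topological order respecting the declared precedences
```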

  15. SU-E-T-512: Electromagnetic Simulations of the Dielectric Wall Accelerator

    SciTech Connect

    Uselmann, A; Mackie, T

    2014-06-01

    Purpose: To characterize and parametrically study the key components of a dielectric wall accelerator through electromagnetic modeling and particle tracking. Methods: Electromagnetic and particle tracking simulations were performed using a commercial code (CST Microwave Studio, CST Inc.) utilizing the finite integration technique. A dielectric wall accelerator consists of a series of stacked transmission lines sequentially fired in synchrony with an ion pulse. Numerous properties of the stacked transmission lines, including geometric, material, and electronic properties, were analyzed and varied in order to assess their impact on the transverse and axial electric fields. Additionally, stacks of transmission lines were simulated in order to quantify the parasitic effect observed in closely packed lines. Particle tracking simulations using the particle-in-cell method were performed on the various stacks to determine the impact of the above properties on the resultant phase space of the ions. Results: Examination of the simulation results shows that novel geometries can shape the accelerating pulse in order to reduce the energy spread and increase the average energy of accelerated ions. Parasitic effects were quantified for various geometries and found to vary with distance from the end of the transmission line and along the beam axis. An optimal arrival time of an ion pulse relative to the triggering of the transmission lines for a given geometry was determined through parametric study. Benchmark simulations of single transmission lines agree well with published experimental results. Conclusion: This work characterized the behavior of the transmission lines used in a dielectric wall accelerator and used this information to improve them in novel ways. Utilizing novel geometries, we were able to improve the accelerating gradient and phase space of the accelerated particle bunch. Through simulation, we were able to discover and optimize design issues with the device at

  16. Monte Carlo Simulations of Nonlinear Particle Acceleration in Parallel Trans-relativistic Shocks

    NASA Astrophysics Data System (ADS)

    Ellison, Donald C.; Warren, Donald C.; Bykov, Andrei M.

    2013-10-01

    We present results from a Monte Carlo simulation of a parallel collisionless shock undergoing particle acceleration. Our simulation, which contains parameterized scattering and a particular thermal leakage injection model, calculates the feedback between accelerated particles ahead of the shock, which influence the shock precursor and "smooth" the shock, and thermal particle injection. We show that there is a transition between nonrelativistic shocks, where the acceleration efficiency can be extremely high and the nonlinear compression ratio can be substantially greater than the Rankine-Hugoniot value, and fully relativistic shocks, where diffusive shock acceleration is less efficient and the compression ratio remains at the Rankine-Hugoniot value. This transition occurs in the trans-relativistic regime and, for the particular parameters we use, occurs around a shock Lorentz factor γ0 = 1.5. We also find that nonlinear shock smoothing dramatically reduces the acceleration efficiency presumed to occur with large-angle scattering in ultra-relativistic shocks. Our ability to seamlessly treat the transition from ultra-relativistic to trans-relativistic to nonrelativistic shocks may be important for evolving relativistic systems, such as gamma-ray bursts and Type Ibc supernovae. We expect a substantial evolution of shock accelerated spectra during this transition from soft early on to much harder when the blast-wave shock becomes nonrelativistic.
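
    The link between compression ratio and spectral hardness mentioned above follows, in the test-particle limit, from the standard diffusive-shock-acceleration result q = 3r/(r-1) for f(p) ~ p^-q; the short sketch below (a standard textbook relation, not the Monte Carlo model itself) evaluates it for the Rankine-Hugoniot ratio and two larger, "nonlinear" ratios.

```python
# Test-particle DSA: momentum index q = 3r/(r-1) for f(p) ~ p^-q, and the
# corresponding energy index q - 2 for relativistic particles, N(E) ~ E^-(q-2).
def dsa_indices(r):
    q = 3.0 * r / (r - 1.0)
    return q, q - 2.0

for r in (4.0, 5.5, 7.0):          # Rankine-Hugoniot ratio and two larger ratios
    q, s = dsa_indices(r)
    print(f"r = {r:.1f}: f(p) ~ p^-{q:.2f},  N(E) ~ E^-{s:.2f}")
```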

  17. MONTE CARLO SIMULATIONS OF NONLINEAR PARTICLE ACCELERATION IN PARALLEL TRANS-RELATIVISTIC SHOCKS

    SciTech Connect

    Ellison, Donald C.; Warren, Donald C.; Bykov, Andrei M. E-mail: ambykov@yahoo.com

    2013-10-10

    We present results from a Monte Carlo simulation of a parallel collisionless shock undergoing particle acceleration. Our simulation, which contains parameterized scattering and a particular thermal leakage injection model, calculates the feedback between accelerated particles ahead of the shock, which influence the shock precursor and 'smooth' the shock, and thermal particle injection. We show that there is a transition between nonrelativistic shocks, where the acceleration efficiency can be extremely high and the nonlinear compression ratio can be substantially greater than the Rankine-Hugoniot value, and fully relativistic shocks, where diffusive shock acceleration is less efficient and the compression ratio remains at the Rankine-Hugoniot value. This transition occurs in the trans-relativistic regime and, for the particular parameters we use, occurs around a shock Lorentz factor γ0 = 1.5. We also find that nonlinear shock smoothing dramatically reduces the acceleration efficiency presumed to occur with large-angle scattering in ultra-relativistic shocks. Our ability to seamlessly treat the transition from ultra-relativistic to trans-relativistic to nonrelativistic shocks may be important for evolving relativistic systems, such as gamma-ray bursts and Type Ibc supernovae. We expect a substantial evolution of shock accelerated spectra during this transition from soft early on to much harder when the blast-wave shock becomes nonrelativistic.

  18. Particle-in-cell simulations of plasma accelerators and electron-neutral collisions

    SciTech Connect

    Bruhwiler, David L.; Giacone, Rodolfo E.; Cary, John R.; Verboncoeur, John P.; Mardahl, Peter; Esarey, Eric; Leemans, W.P.; Shadwick, B.A.

    2001-10-01

    We present 2-D simulations of both beam-driven and laser-driven plasma wakefield accelerators, using the object-oriented particle-in-cell code XOOPIC, which is time explicit, fully electromagnetic, and capable of running on massively parallel supercomputers. Simulations of laser-driven wakefields with low (~10^16 W/cm^2) and high (~10^18 W/cm^2) peak intensity laser pulses are conducted in slab geometry, showing agreement with theory and fluid simulations. Simulations of the E-157 beam wakefield experiment at the Stanford Linear Accelerator Center, in which a 30 GeV electron beam passes through 1 m of preionized lithium plasma, are conducted in cylindrical geometry, obtaining good agreement with previous work. We briefly describe some of the more significant modifications of XOOPIC required by this work, and summarize the issues relevant to modeling relativistic electron-neutral collisions in a particle-in-cell code.
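
    As a minimal illustration of the particle-in-cell cycle itself (deposit charge, solve for the field, push particles), the sketch below runs a 1-D electrostatic plasma oscillation in normalized units. It is a generic toy model, not XOOPIC, and it uses a simple explicit push rather than a properly staggered leapfrog.

      import numpy as np

      # Toy 1-D electrostatic PIC: cold plasma oscillation in normalized units
      # (omega_p = 1, electron charge-to-mass ratio = -1).
      ng, n_part, nsteps = 64, 4096, 200
      L = 2 * np.pi
      dx, dt = L / ng, 0.05
      x = np.linspace(0.0, L, n_part, endpoint=False)
      x += 0.01 * np.cos(x)                    # small sinusoidal displacement
      v = np.zeros(n_part)
      k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
      k[0] = 1.0                               # dummy value; the k = 0 mode is zeroed below

      def solve_field(x):
          """Deposit electrons with linear weighting and solve Gauss's law by FFT."""
          g = x / dx
          gf = np.floor(g)
          i0, w1 = gf.astype(int) % ng, g - gf
          dep = np.bincount(i0, 1.0 - w1, ng) + np.bincount((i0 + 1) % ng, w1, ng)
          rho = 1.0 - dep / dep.mean()         # fixed ion background minus electrons
          rho_k = np.fft.fft(rho)
          rho_k[0] = 0.0
          return np.real(np.fft.ifft(rho_k / (1j * k))), i0, w1

      for _ in range(nsteps):
          E, i0, w1 = solve_field(x)
          Ep = (1.0 - w1) * E[i0] + w1 * E[(i0 + 1) % ng]   # gather field at particles
          v -= Ep * dt                                      # accelerate electrons
          x = (x + v * dt) % L                              # periodic advance
      print("mean kinetic energy per particle:", 0.5 * np.mean(v**2))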

  19. Simulator for an Accelerator-Driven Subcritical Fissile Solution System

    SciTech Connect

    Klein, Steven Karl; Day, Christy M.; Determan, John C.

    2015-09-14

    LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation (DSS) comprising coupled nonlinear differential equations that describe the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems, as is a stability model. The DSS may then be converted to a Visual Studio implementation that gives a design team the ability to rapidly estimate system performance impacts from a variety of design decisions, providing a method to assist in optimization of the system design. Once the design has been generated in some detail, the C++ version of the system model may be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation as well as operator recognition of and response to off-normal events. Taken as a set of system models, the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.
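
    As a flavor of the coupled nonlinear ODEs such a dynamic system simulation integrates, the sketch below solves one-group delayed-neutron point kinetics with SciPy. The kinetics parameters and reactivity step are hypothetical, and the actual DSS additionally couples radiolytic gas transport and thermal hydraulics.

      import numpy as np
      from scipy.integrate import solve_ivp

      # One-group delayed-neutron point kinetics (hypothetical parameters; the DSS
      # described above couples many more equations).
      beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay constant (1/s), generation time (s)
      rho = 0.001                                # constant reactivity step (hypothetical)

      def rhs(t, y):
          n, c = y                               # neutron population, precursor concentration
          dn = (rho - beta) / Lambda * n + lam * c
          dc = beta / Lambda * n - lam * c
          return [dn, dc]

      y0 = [1.0, beta / (lam * Lambda)]          # equilibrium precursor level for n = 1
      sol = solve_ivp(rhs, (0.0, 5.0), y0, method="LSODA", max_step=0.01)
      print("relative power after 5 s:", sol.y[0, -1])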

  20. Accelerated finite element elastodynamic simulations using the GPU

    SciTech Connect

    Huthwaite, Peter

    2014-01-15

    An approach is developed to perform explicit time domain finite element simulations of elastodynamic problems on the graphical processing unit, using Nvidia's CUDA. Of critical importance for this problem is the arrangement of nodes in memory, allowing data to be loaded efficiently and minimising communication between the independently executed blocks of threads. The initial stage of memory arrangement is partitioning the mesh; both a well established ‘greedy’ partitioner and a new, more efficient ‘aligned’ partitioner are investigated. A method is then developed to efficiently arrange the memory within each partition. The software is applied to three models from the fields of non-destructive testing, vibrations and geophysics, demonstrating a memory bandwidth very close to the card's maximum, reflecting the bandwidth-limited nature of the algorithm. Comparison with Abaqus, a widely used commercial CPU equivalent, validated the accuracy of the results and demonstrated a speed improvement of around two orders of magnitude. A software package, Pogo, incorporating these developments, is released open source, downloadable from (http://www.pogo-fea.com/) to benefit the community. -- Highlights: •A novel memory arrangement approach is discussed for finite elements on the GPU. •The mesh is partitioned then nodes are arranged efficiently within each partition. •Models from ultrasonics, vibrations and geophysics are run. •The code is significantly faster than an equivalent commercial CPU package. •Pogo, the new software package, is released open source.
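
    As a toy illustration of the general 'greedy' idea (not the partitioner implemented in Pogo), the sketch below grows each partition by breadth-first search over an element adjacency graph until it reaches a target size; the chain mesh at the end is a made-up example.

      from collections import deque

      # Toy greedy mesh partitioner: grow each partition over the element adjacency
      # graph until it reaches its target size (simplified sketch only).
      def greedy_partition(adjacency, n_parts):
          n = len(adjacency)
          target = -(-n // n_parts)          # ceiling division
          part = [-1] * n
          next_seed = 0
          for p in range(n_parts):
              while next_seed < n and part[next_seed] != -1:
                  next_seed += 1             # seed with the lowest unassigned element
              if next_seed == n:
                  break
              queue, size = deque([next_seed]), 0
              while queue and size < target:
                  e = queue.popleft()
                  if part[e] != -1:
                      continue
                  part[e] = p
                  size += 1
                  queue.extend(nb for nb in adjacency[e] if part[nb] == -1)
          return part

      # Example: a 1-D chain of 8 elements split into 3 partitions.
      chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
      print(greedy_partition(chain, 3))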

  1. Multi-GPU Accelerated Simulation of Dynamically Evolving Fluid Pathways

    NASA Astrophysics Data System (ADS)

    Räss, Ludovic; Omlin, Samuel; Moulas, Evangelos; Simon, Nina S. C.; Podladchikov, Yuri

    2014-05-01

    Fluid flow in porous rocks, both naturally occurring and caused by reservoir operations, mostly takes place along localized high permeability pathways. Pervasive flooding of the rock matrix is rarely observed, in particular for low permeability rocks. The pathways appear to form dynamically in response to the fluid flow itself; the number of pathways, their location and their hydraulic conductivity may change in time. We propose a physically and thermodynamically consistent model that describes the formation and evolution of fluid pathways. The model consists of a system of equations describing poro-elasto-viscous deformation and flow. We have implemented the strongly coupled equations into a numerical model. Nonlinearity of the solid rheology is also taken into account. We have developed a fully three-dimensional numerical MATLAB application based on an iterative finite difference scheme. We have ported it to C-CUDA using MPI to run it on multi-GPU clusters. Numerical tuning of the application based on memory bandwidth throughput allows the application to approach hardware peak performance. High-resolution three-dimensional simulations predict the formation of dynamically evolving high porosity and permeability pathways as a natural outcome of porous flow coupled with rock deformation.

  2. Accelerated finite element elastodynamic simulations using the GPU

    NASA Astrophysics Data System (ADS)

    Huthwaite, Peter

    2014-01-01

    An approach is developed to perform explicit time domain finite element simulations of elastodynamic problems on the graphical processing unit, using Nvidia's CUDA. Of critical importance for this problem is the arrangement of nodes in memory, allowing data to be loaded efficiently and minimising communication between the independently executed blocks of threads. The initial stage of memory arrangement is partitioning the mesh; both a well established ‘greedy’ partitioner and a new, more efficient ‘aligned’ partitioner are investigated. A method is then developed to efficiently arrange the memory within each partition. The software is applied to three models from the fields of non-destructive testing, vibrations and geophysics, demonstrating a memory bandwidth very close to the card's maximum, reflecting the bandwidth-limited nature of the algorithm. Comparison with Abaqus, a widely used commercial CPU equivalent, validated the accuracy of the results and demonstrated a speed improvement of around two orders of magnitude. A software package, Pogo, incorporating these developments, is released open source, downloadable from http://www.pogo-fea.com/ to benefit the community.

  3. Automated detection and analysis of particle beams in laser-plasma accelerator simulations

    SciTech Connect

    Ushizima, Daniela Mayumi; Geddes, C. G.; Cormier-Michel, E.; Bethel, E. Wes; Jacobsen, J.; Prabhat; Rübel, O.; Weber, G.; Hamann, B.

    2010-05-21

    Numerical simulations of laser-plasma wakefield (particle) accelerators model the acceleration of electrons trapped in plasma oscillations (wakes) left behind when an intense laser pulse propagates through the plasma. The goal of these simulations is to better understand the processes involved in plasma wake generation and how electrons are trapped and accelerated by the wake. Understanding and developing such accelerators matters because they offer high accelerating gradients, potentially reducing the size and cost of new accelerators. One operating regime of interest is where a trapped subset of electrons loads the wake and forms an isolated group of accelerated particles with low spread in momentum and position, desirable characteristics for many applications. The electrons trapped in the wake may be accelerated to high energies, the plasma gradient in the wake reaching up to a gigaelectronvolt per centimeter. High-energy electron accelerators power intense radiation sources from X-rays to terahertz, and are used in many applications including medical radiotherapy and imaging. To extract information from the simulation about the quality of the beam, a typical approach is to examine plots of the entire dataset, visually determining the adequate parameters necessary to select a subset of particles, which is then further analyzed. This procedure requires laborious examination of massive data sets over many time steps using several plots, a routine that is unfeasible for large data collections. Demand for automated analysis is growing along with the volume and size of simulations. Current 2D LWFA simulation datasets are typically between 1 GB and 100 GB in size, but 3D simulations are of the order of terabytes. The increase in the number and size of datasets leads to a need for automatic routines that recognize particle patterns as particle bunches (beams of electrons) for subsequent analysis. Because of the growth in dataset size, the application of machine learning techniques for

  4. Numerical Simulations for the Cool-Down of the XFEL and TTF Superconducting Linear Accelerators

    SciTech Connect

    Jensch, K.; Lange, R.; Petersen, B.

    2004-06-23

    The alignment of the superconducting RF-cavities and the magnet packages of the cryomodules of the future XFEL linear accelerator and the existing TTF linear accelerator at DESY can be affected by the mechanical stress caused by thermal gradients during the cool-down and warm-up. Also the design of the XFEL cryogenic system has to include the cool-down and warm-up procedures. An object-oriented software concept is applied to analyze the cool-down procedures for the TTF and the XFEL linear accelerators by numerical simulations. The numerical results are compared to measurements taken during the first cool-down of the TTF linear accelerator. Some results for the XFEL cryogenic system are presented.

  5. 3-D RPIC Simulations of Relativistic Jets: Particle Acceleration, Magnetic Field Generation, and Emission

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Mizuno, Y.; Hardee, P.; Hededal, C. B.; Fishman, G. J.

    2006-01-01

    Recent PIC simulations using injected relativistic electron-ion (electron-positron) jets into ambient plasmas show that acceleration occurs in relativistic shocks. The Weibel instability created in shocks is responsible for particle acceleration, and generation and amplification of highly inhomogeneous, small-scale magnetic fields. These magnetic fields contribute to the electron's transverse deflection in relativistic jets. The "jitter" radiation from deflected electrons has different properties than the synchrotron radiation which is calculated in a uniform magnetic field. This jitter radiation may be important to understand the complex time evolution and spectral structure in relativistic jets and gamma-ray bursts. We will present recent PIC simulations which show particle acceleration and magnetic field generation. We will also calculate associated self-consistent emission from relativistic shocks.

  6. Simulation studies of acceleration of heavy ions and their elemental compositions; IFSR--755

    SciTech Connect

    Toida, Mieko; Ohsawa, Yukiharu

    1996-07-01

    By using a one-dimensional, electromagnetic particle simulation code with full ion and electron dynamics, we have studied the acceleration of heavy ions by a nonlinear magnetosonic wave in a multi-ion-species plasma. First, we describe the mechanism of heavy ion acceleration by magnetosonic waves. We then investigate this by particle simulations. The simulation plasma contains four ion species: H, He, O, and Fe. The number density of He is taken to be 10% of that of H, and those of O and Fe are much lower. Simulations confirm that, as in a single-ion-species plasma, some of the hydrogens can be accelerated by the longitudinal electric field formed in the wave. Furthermore, they show that magnetosonic waves can accelerate all the particles of all the heavy species (He, O, and Fe) by a different mechanism, i.e., by the transverse electric field. The maximum speeds of the heavy species are about the same, of the order of the wave propagation speed. These are in good agreement with theoretical prediction. These results indicate that, if high-energy ions are produced in the solar corona through these mechanisms, the elemental compositions of these heavy ions can be similar to that of the background plasma, i.e., the corona.

  7. Phantom-GRAPE: SIMD accelerated numerical library for N-body simulations

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Nitadori, Keigo; Okamoto, Takashi

    2012-09-01

    Phantom-GRAPE is a numerical software library to accelerate collisionless N-body simulations with the SIMD instruction set on the x86 architecture. Newton's forces, and also central forces with an arbitrary shape f(r) that have a finite cutoff radius r_cut (i.e. f(r)=0 at r>r_cut), can be computed quickly.
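
    For orientation, the plain NumPy reference below evaluates the kind of pairwise interaction Phantom-GRAPE accelerates: a softened attractive force truncated at a cutoff radius r_cut. It is an illustrative sketch without any SIMD code, and the softening, cutoff, and particle count are arbitrary choices.

      import numpy as np

      # Direct-summation softened attraction with a finite cutoff radius (reference
      # implementation only; the library above accelerates this with SIMD intrinsics).
      def accelerations(pos, mass, eps=1.0e-2, r_cut=1.0):
          dx = pos[None, :, :] - pos[:, None, :]          # displacement of j relative to i
          r2 = (dx**2).sum(axis=-1) + eps**2              # softened squared distance
          inv_r3 = r2**-1.5
          inv_r3[np.sqrt(r2) > r_cut] = 0.0               # zero force beyond the cutoff
          np.fill_diagonal(inv_r3, 0.0)                   # no self-force
          return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

      rng = np.random.default_rng(0)
      pos = rng.random((256, 3))
      mass = np.full(256, 1.0 / 256)
      print(accelerations(pos, mass)[0])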

  8. 3D simulations of pre-ionized and two-stage ionization injected laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Davidson, Asher; Zheng, Ming; Lu, Wei; Xu, Xinlu; Joshi, Chang; Silva, Luis O.; Martins, Joana; Fonseca, Ricardo; Mori, Warren B.

    2012-12-01

    In plasma based accelerators (LWFA and PWFA), the method of injecting high quality electron bunches into the accelerating wakefield is of utmost importance for various applications. To fully understand the numerical effect of simulating the trapping process, numerous numerical convergence tests were performed to ensure the correctness of preionized simulations, which confirm the physical picture first proposed in [1]. We further investigate the use of a two-stage ionization injected LWFA to achieve high quality monoenergetic beams through the use of 3D PIC simulations. The first stage constitutes the Injection Regime, which is 99.5% He and 0.5% N, while the second stage constitutes the Acceleration Regime, which is entirely composed of He. Two of the simulations model the parameters of the LWFA experiments for the LLNL Callisto laser, at laser powers of 90 and 100 TW. Energies as high as 680 MeV were observed in the 90 TW simulation, and as high as 1.44 GeV in the 100 TW simulation. The effect of the matching condition of the spot size in this LWFA is discussed.

  9. Monte Carlo radiotherapy simulations of accelerated repopulation and reoxygenation for hypoxic head and neck cancer

    PubMed Central

    Harriss-Phillips, W M; Bezak, E; Yeoh, E K

    2011-01-01

    Objective A temporal Monte Carlo tumour growth and radiotherapy effect model (HYP-RT) simulating hypoxia in head and neck cancer has been developed and used to analyse parameters influencing cell kill during conventionally fractionated radiotherapy. The model was designed to simulate individual cell division up to 10^8 cells, while incorporating radiobiological effects, including accelerated repopulation and reoxygenation during treatment. Method Reoxygenation of hypoxic tumours has been modelled using randomised increments of oxygen to tumour cells after each treatment fraction. The process of accelerated repopulation has been modelled by increasing the symmetrical stem cell division probability. Both phenomena had their onset either immediately or after a number of weeks of simulated treatment. Results The extra dose required to control (total cell kill) hypoxic vs oxic tumours was 15–25% (8–20 Gy for 5×2 Gy per week), depending on the timing of accelerated repopulation onset. Reoxygenation of hypoxic tumours resulted in resensitisation and a reduction of approximately 10% in the total dose required, depending on the time of onset. When modelled simultaneously, accelerated repopulation and reoxygenation affected cell kill in hypoxic tumours in a similar manner to when the phenomena were modelled individually; however, the degree was altered, with non-additive results. Simulation results were in good agreement with standard linear quadratic theory; however, they differed for more complex comparisons where hypoxia and reoxygenation as well as accelerated repopulation effects were considered. Conclusion Simulations have quantitatively confirmed the need for patient individualisation in radiotherapy for hypoxic head and neck tumours, and have shown the benefits of modelling complex and dynamic processes using Monte Carlo methods. PMID:21933980
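
    For comparison with the linear quadratic theory mentioned above, the sketch below evaluates cumulative LQ cell survival over a fractionated schedule, with hypoxia represented crudely through an oxygen enhancement ratio. The α, β, and OER values are hypothetical; HYP-RT itself simulates individual cells rather than this analytic formula.

      import numpy as np

      # Linear-quadratic survival over a fractionated schedule (analytic point of
      # comparison only; parameter values are hypothetical).
      def surviving_fraction(n_fractions, dose_per_fraction, alpha, beta, oer=1.0):
          d = dose_per_fraction / oer           # oxygen enhancement reduces effective dose
          return np.exp(n_fractions * (-alpha * d - beta * d**2))

      alpha, beta = 0.3, 0.03                   # hypothetical values in Gy^-1 and Gy^-2
      for label, oer in (("oxic", 1.0), ("hypoxic", 2.5)):
          sf = surviving_fraction(35, 2.0, alpha, beta, oer)
          print(f"{label:8s} survival after 35 x 2 Gy: {sf:.2e}")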

  10. Laser-wakefield accelerators for medical phase contrast imaging: Monte Carlo simulations and experimental studies

    NASA Astrophysics Data System (ADS)

    Cipiccia, S.; Reboredo, D.; Vittoria, Fabio A.; Welsh, G. H.; Grant, P.; Grant, D. W.; Brunetti, E.; Wiggins, S. M.; Olivo, A.; Jaroszynski, D. A.

    2015-05-01

    X-ray phase contrast imaging (X-PCi) is a very promising method of dramatically enhancing the contrast of X-ray images of microscopic weakly absorbing objects and soft tissue, which may lead to significant advances in high-resolution, low-dose medical imaging. The interest in X-PCi is giving rise to a demand for effective simulation methods. Monte Carlo codes have proved to be a valuable tool for studying X-PCi, including coherent effects. The laser-plasma wakefield accelerator (LWFA) is a very compact particle accelerator that uses plasma as the accelerating medium. Accelerating gradients in excess of 1 GV/cm can be obtained, which makes LWFAs over a thousand times more compact than conventional accelerators. LWFAs are also sources of brilliant betatron radiation, which is promising for applications including medical imaging. We present a study that explores the potential of LWFA-based betatron sources for medical X-PCi, investigate their resolution limit using numerical simulations based on the FLUKA Monte Carlo code, and present preliminary experimental results.

  11. Three-Dimensional PIC Simulation of Laser-Ion Acceleration from Ultrathin Targets

    NASA Astrophysics Data System (ADS)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Bergen, B.; Hegelich, B. M.; Flippo, K. A.; Fernández, J. C.

    2008-11-01

    One- and two-dimensional particle-in-cell simulations of the Break-Out Afterburner (BOA) [1] show that new ion acceleration regimes emerge when ultraintense, high-contrast lasers impinge on ultrathin (tens of nm) targets. The BOA has now been demonstrated in three-dimensional (3D) simulations with solid-density targets using VPIC [2]. Comparisons of monoenergetic beams, maximum ion energy, and conversion efficiency have been made with 3D VPIC simulations of ion acceleration from high-contrast circularly polarized lasers [3] with identical intensity, spot size and target composition. Studies have been made of the BOA for different intensities and target thicknesses. [1] Yin et al., LPB 24, 1-8 (2006); Yin et al., PoP 14, 056706 (2007). [2] Bowers et al., PoP 15, 055703 (2008). [3] Zhang et al., PoP 14, 123108 (2007); Robinson et al., NJP 10, 013021 (2008)

  12. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
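
    For context, the sketch below shows the non-delayed tau-leaping step that such accelerated algorithms build on, applied to a simple birth-death process with hypothetical rate constants. D-leaping itself adds the adaptive, time-distributed handling of delayed reactions described above.

      import numpy as np

      # Baseline tau-leaping for a birth-death system (illustrative only; D-leaping
      # extends this idea to reactions with delays).
      rng = np.random.default_rng(1)
      k_birth, k_death = 10.0, 0.1     # hypothetical rate constants
      x, t, tau, t_end = 0, 0.0, 0.05, 100.0

      while t < t_end:
          a_birth = k_birth            # propensities
          a_death = k_death * x
          # fire a Poisson number of each reaction during the leap interval tau
          x += rng.poisson(a_birth * tau) - rng.poisson(a_death * tau)
          x = max(x, 0)                # guard against negative populations
          t += tau

      print("population at t =", t_end, ":", x, "(expected mean", k_birth / k_death, ")")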

  13. Hybrid PIC Simulations of Particle Dynamics in Coaxial Plasma Jet Accelerators

    NASA Astrophysics Data System (ADS)

    Thoma, Carsten; Hughes, Thomas; Welch, Dale; Hakel, Peter

    2007-11-01

    We describe the results of 1D and 2D simulations of plasma jet accelerators using the particle-in-cell (PIC) code Lsp. Previous studies of 1D cartesian simulations have shown that ion particle dynamics at the plasma-vacuum interface depend critically on the local Hall parameter, which is strongly dependent on electron temperature. In a coaxial accelerator with finite transverse dimensions, large transverse ion motions, predicted at moderate Hall parameters in 1D, can lead to ion loss to the walls. The results of 2D r-z jet simulations are described and compared with the 1D cartesian results. The effects of particle loss and ablation at the wall are considered, as are electron heating mechanisms at the plasma-vacuum interface, including radiation losses. We will apply the results to the plasma jet experiments underway at HyperV Technologies Corp.

  14. Simulation on buildup of electron cloud in a proton circular accelerator

    NASA Astrophysics Data System (ADS)

    Li, Kai-Wei; Liu, Yu-Dong

    2015-10-01

    Electron cloud interactions with high energy positive beams are believed to be responsible for various undesirable effects such as vacuum degradation, collective beam instability and even beam loss in high power proton circular accelerators. An important uncertainty in predicting electron cloud instability lies in the detailed processes of generation and accumulation of the electron cloud. Simulation of the build-up of the electron cloud is necessary for further studies of beam instabilities caused by electron clouds. The China Spallation Neutron Source (CSNS) is an intense proton accelerator facility now being built, whose accelerator complex includes two main parts: an H-linac and a rapid cycling synchrotron (RCS). The RCS accumulates the 80 MeV proton beam and accelerates it to 1.6 GeV with a repetition rate of 25 Hz. During low-energy beam injection, the emerging electron cloud may cause serious instability and beam loss on the vacuum pipe. A simulation code has been developed to simulate the build-up, distribution and density of the electron cloud in the CSNS/RCS. Supported by the National Natural Science Foundation of China (11275221, 11175193).

  15. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    SciTech Connect

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; /SLAC /TechX Corp. /Fermilab

    2008-08-01

    SciDAC1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at the petascale. These tools target the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES).

  16. Numerical simulation of reacting flow in a thermally choked ram accelerator projectile launch system

    NASA Astrophysics Data System (ADS)

    Nusca, Michael J.

    1991-06-01

    CFD solutions for the Navier-Stokes equations are presently applied to a ram-accelerator projectile launcher's reacting and nonreacting turbulent flowfields. The gases in question are a hydrocarbon such as CH4, an oxidizer such as O2, and an inert gas such as N2. Numerical simulations are presented which highlight in-bore flowfield details and allow comparisons with measured launch tube wall pressures and projectile thrust as a function of velocity. The computation results thus obtained are used to ascertain the operational feasibility of a proposed 120-mm-bore ram accelerator system.

  17. Centrifugal acceleration at high altitudes above the polar cap: A Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Abudayyeh, H. A.; Barghouthi, I. A.; Slapak, R.; Nilsson, H.

    2015-08-01

    A Monte Carlo simulation was used to study the outflow of O+ and H+ ions along three flight trajectories above the polar cap up to altitudes of about 15 RE. Barghouthi (2008) developed a model on the basis of altitude and velocity-dependent wave-particle interactions and a radial geomagnetic field which includes the effects of ambipolar electric field and gravitational and mirror forces. In the present work we improve this model to include the effect of the centrifugal force, with the use of relevant boundary conditions. In addition, the magnetic field and flight trajectories, namely, the central polar cap (CPC), nightside polar cap (NPC), and cusp, were calculated using the Tsyganenko T96 model. To simulate wave-particle interactions, the perpendicular velocity diffusion coefficients for O+ ions in each region were determined such that the simulation results fit the observations. For H+ ions, a constant perpendicular velocity diffusion coefficient was assumed for all altitudes in all regions as recommended by Nilsson et al. (2013). The effect of centrifugal acceleration was simulated by considering three values for the ionospheric electric field: 0 (no centrifugal acceleration), 50, and 100 mV/m. It was found that the centrifugal acceleration increases the parallel bulk velocity and decreases the parallel and perpendicular temperatures of both ion species at altitudes above about 4 RE. Centrifugal acceleration also increases the temperature anisotropy at high altitudes. At a given altitude, centrifugal acceleration decreases the density of H+ ions while it increases the density of O+ ions. This implies that with higher centrifugal acceleration more O+ ions overcome the potential barrier. It was also found that aside from two exceptions centrifugal acceleration has the same effect on the velocities of both ions. This implies that the centrifugal acceleration is universal for all particles. The parallel bulk velocities at a given value of ionospheric electric field

  18. Experiments in sensing transient rotational acceleration cues on a flight simulator

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1979-01-01

    Results are presented for two transient motion sensing experiments which were motivated by the identification of an anomalous roll cue (a 'jerk' attributed to an acceleration spike) in a prior investigation of realistic fighter motion simulation. The experimental results suggest the consideration of several issues for motion washout and challenge current sensory system modeling efforts. Although no sensory modeling effort is made, it is argued that such models must incorporate the ability to handle transient inputs of short duration (some of which are less than the accepted latency times for sensing), and must represent separate channels for rotational acceleration and velocity sensing.

  19. Diffusive Shock Acceleration Simulations: Comparison with Particle Methods and Bow Shock Measurements

    NASA Astrophysics Data System (ADS)

    Kang, Hyesung; Jones, T. W.

    1995-07-01

    Direct comparisons of diffusive particle acceleration numerical simulations have been made against Monte Carlo and hybrid plasma simulations by Ellison et al. (1993) and against observations at the Earth's bow shock presented by Ellison et al. (1990). Toward this end we have introduced a new numerical scheme for injection of cosmic-ray particles out of the thermal plasma, modeled by way of the diffusive scattering process itself; that is, the diffusion and acceleration across the shock front of particles out of the suprathermal tail of the Maxwellian distribution. Our simulations take two forms. First, we have solved numerically the time-dependent diffusion-advection equation for the high-energy (cosmic-ray) protons in one-dimensional quasiparallel shocks. Dynamical feedback between the particles and thermal plasma is included. The proton fluxes on both sides of the shock derived from our method are consistent with those calculated by Ellison et al. (1993). A similar test has compared our methods to published measurements at the Earth's bow shock when the interplanetary magnetic field was almost parallel to the solar wind velocity (Ellison et al. 1990). Again our results are in good agreement. Second, the same shock conditions have been simulated with the two-fluid version of diffusive shock acceleration theory by adopting injection rates and the closure parameters inferred from the diffusion-advection equation calculations. The acceleration efficiency and the shock structure calculated with the two-fluid method are in good agreement with those computed with the diffusion-advection method. Thus, we find that all of these computational methods (diffusion-advection, two-fluid, Monte Carlo, and hybrid) are in substantial agreement on the issues they can simultaneously address, so that the essential physics of diffusive particle acceleration is adequately contained within each. This is despite the fact that each makes what appear to be very different assumptions or

  20. Mean-state acceleration of cloud-resolving models and large eddy simulations

    DOE PAGESBeta

    Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.

    2015-10-29

    In this study, large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, aid in the development of moist physical parameterization for global models, study cloud-climate feedbacks and cloud-aerosol interaction, and as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation to accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2–16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical in a GCM.
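
    A schematic of the basic idea, under the simplifying assumption that the acceleration is applied as a multiplier on the horizontal-mean change after each step (the published scheme is more involved), is sketched below.

      import numpy as np

      # Toy mean-state acceleration: amplify only the horizontal-mean part of the
      # change per step, leaving the eddy (deviation) part untouched.
      def accelerated_step(field, tendency, dt, factor):
          """field, tendency: (nz, nx) arrays; factor: mean-state acceleration factor."""
          updated = field + dt * tendency
          mean_change = (updated - field).mean(axis=1, keepdims=True)
          return updated + (factor - 1.0) * mean_change

      nz, nx = 4, 8
      rng = np.random.default_rng(2)
      field = np.zeros((nz, nx))
      tendency = 1.0 + 0.1 * rng.standard_normal((nz, nx))
      new = accelerated_step(field, tendency, dt=1.0, factor=4.0)
      print("mean change per level:", (new - field).mean(axis=1))   # ~4x the unaccelerated mean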

  1. Mean-state acceleration of cloud-resolving models and large eddy simulations

    SciTech Connect

    Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.

    2015-10-29

    In this study, large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, aid in the development of moist physical parameterization for global models, study cloud-climate feedbacks and cloud-aerosol interaction, and as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation to accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2–16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical in a GCM.

  2. Two-fluid electromagnetic simulations of plasma-jet acceleration with detailed equation-of-state

    SciTech Connect

    Thoma, C.; Welch, D. R.; Clark, R. E.; Bruner, N.; MacFarlane, J. J.; Golovkin, I. E.

    2011-10-15

    We describe a new particle-based two-fluid fully electromagnetic algorithm suitable for modeling high density (n_i ~ 10^17 cm^-3) and high Mach number laboratory plasma jets. In this parameter regime, traditional particle-in-cell (PIC) techniques are challenging due to electron timescale and lengthscale constraints. In this new approach, an implicit field solve allows the use of large timesteps while an Eulerian particle remap procedure allows simulations to be run with very few particles per cell. Hall physics and charge separation effects are included self-consistently. A detailed equation of state (EOS) model is used to evolve the ion charge state and introduce non-ideal gas behavior. Electron cooling due to radiation emission is included in the model as well. We demonstrate the use of these new algorithms in 1D and 2D Cartesian simulations of railgun (parallel plate) jet accelerators using He and Ar gases. The inclusion of EOS and radiation physics reduces the electron temperature, resulting in higher calculated jet Mach numbers in the simulations. We also introduce a surface physics model for jet accelerators in which a frictional drag along the walls leads to axial spreading of the emerging jet. The simulations demonstrate that high Mach number jets can be produced by railgun accelerators for a variety of applications, including high energy density physics experiments.

  3. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  4. Plasma flow and fast particles in a hypervelocity accelerator - A color presentation. [micrometeoroid simulation

    NASA Technical Reports Server (NTRS)

    Igenbergs, E. B.; Cour-Palais, B.; Fisher, E.; Stehle, O.

    1975-01-01

    A new concept for particle acceleration for micrometeoroid simulation was developed at NASA Marshall Space Flight Center, using a high-density self-luminescent fast plasma flow to accelerate glass beads (with a diameter up to 1.0 mm) to velocities between 15-20 km/sec. After a short introduction to the operation of the hypervelocity range, the eight-converter-camera unit used for the photographs of the plasma flow and the accelerated particles is described. These photographs are obtained with an eight-segment reflecting pyramidal beam splitter. Wratten filters were mounted between the beam splitter and the converter tubes of the cameras. The photographs, which were recorded on black and white film, were used to make the matrices for the dye-color process, which produced the prints shown.

  5. Simulation analysis for effects of bone loss on acceleration tolerance of human lumbar vertebra

    NASA Astrophysics Data System (ADS)

    Ma, Honglei; Zhang, Feng; Zhu, Yu; Xiao, Yanhua; Wazir, Abrar

    2014-02-01

    The purpose of the present study was to analyze and predict the changes in the acceleration tolerance of a human vertebra as a result of bone loss caused by long-term space flight. A human L3-L4 vertebra FEM model was constructed, in which the cancellous bone was separated, and surrounding ligaments were also taken into account. The simulation results demonstrated that bone loss has more of an effect on the acceleration tolerance in the x-direction. The results serve to aid in the creation of new acceleration tolerance standards, ensuring astronauts return home safely after long-term space flight. This study shows that more attention should be focused on the bone degradation of crew members and on creating new protective designs for space capsules in the future.

  6. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    SciTech Connect

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  7. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    Energy Science and Technology Software Center (ESTSC)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  8. Evaluation of a server-client architecture for accelerator modeling and simulation

    SciTech Connect

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; van Zeijts, J.; Witherspoon, S.

    1997-02-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands. © 1997 American Institute of Physics.

  9. Evaluation of a server-client architecture for accelerator modeling and simulation

    SciTech Connect

    Bowling, B. A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-02-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands.

  10. Evaluation of a server-client architecture for accelerator modeling and simulation

    SciTech Connect

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-11-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands.
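
    A minimal sketch of the query-based client/server pattern described above, using Python's built-in XML-RPC machinery with a made-up optics lookup as the 'model engine'; this is a generic illustration, not the Jefferson Laboratory implementation, and the port number and element names are arbitrary.

      import threading
      from xmlrpc.server import SimpleXMLRPCServer
      from xmlrpc.client import ServerProxy

      def twiss_at(element):
          """Stand-in 'model engine': return made-up optics values for a named element."""
          table = {"Q1": {"beta_x": 12.3, "beta_y": 4.5}, "Q2": {"beta_x": 3.2, "beta_y": 9.8}}
          return table.get(element, {})

      # Model server answers queries instead of writing files to disk.
      server = SimpleXMLRPCServer(("localhost", 8765), allow_none=True, logRequests=False)
      server.register_function(twiss_at)
      threading.Thread(target=server.serve_forever, daemon=True).start()

      # Client (e.g., a control-system application) issues a query over the network.
      client = ServerProxy("http://localhost:8765")
      print(client.twiss_at("Q1"))
      server.shutdown()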

  11. Characteristics of an envelope model for laser-plasma accelerator simulation

    SciTech Connect

    Cowan, Benjamin M.; Bruhwiler, David L.; Cormier-Michel, Estelle; Esarey, Eric; Geddes, Cameron G.R.; Messmer, Peter; Paul, Kevin M.

    2011-01-01

    Simulation of laser-plasma accelerator (LPA) experiments is computationally intensive due to the disparate length scales involved. Current experiments extend hundreds of laser wavelengths transversely and many thousands in the propagation direction, making explicit PIC simulations enormously expensive and requiring massively parallel execution in 3D. Simulating the next generation of LPA experiments is expected to increase the computational requirements yet further, by a factor of 1000. We can substantially improve the performance of LPA simulations by modeling the envelope evolution of the laser field rather than the field itself. This allows for much coarser grids, since we need only resolve the plasma wavelength and not the laser wavelength, and therefore larger timesteps can be used. Thus an envelope model can result in savings of several orders of magnitude in computational resources. By propagating the laser envelope in a Galilean frame moving at the speed of light, dispersive errors can be avoided and simulations over long distances become possible. The primary limitation to this envelope model is when the laser pulse develops large frequency shifts, and thus the slowly-varying envelope assumption is no longer valid. Here we describe the model and its implementation, and show rigorous benchmarks for the algorithm, establishing second-order convergence and correct laser group velocity. We also demonstrate simulations of LPA phenomena such as self-focusing and meter-scale acceleration stages using the model.
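
    The grid savings claimed above can be estimated with a short back-of-the-envelope calculation: an explicit PIC code must resolve the laser wavelength, whereas the envelope model only needs to resolve the plasma wavelength. The plasma density and laser wavelength used below are illustrative assumptions.

      import numpy as np

      # Rough estimate of envelope-model grid savings (illustrative numbers only).
      e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
      n_e = 1.0e24                 # plasma density in m^-3 (1e18 cm^-3, hypothetical)
      lambda_laser = 0.8e-6        # Ti:sapphire laser wavelength in m

      omega_p = np.sqrt(n_e * e**2 / (eps0 * m_e))
      lambda_p = 2 * np.pi * c / omega_p
      print(f"plasma wavelength: {lambda_p * 1e6:.1f} um")
      print(f"longitudinal cell-count ratio ~ {lambda_p / lambda_laser:.0f}x per direction")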

  12. Radiation belt electron acceleration during the 17 March 2015 geomagnetic storm: Observations and simulations

    NASA Astrophysics Data System (ADS)

    Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Zhang, X.-J.; Li, J.; Baker, D. N.; Reeves, G. D.; Spence, H. E.; Kletzing, C. A.; Kurth, W. S.; Hospodarsky, G. B.; Blake, J. B.; Fennell, J. F.; Kanekal, S. G.; Angelopoulos, V.; Green, J. C.; Goldstein, J.

    2016-06-01

    Various physical processes are known to cause acceleration, loss, and transport of energetic electrons in the Earth's radiation belts, but their quantitative roles at different times and locations need further investigation. During the largest storm over the past decade (17 March 2015), relativistic electrons experienced fairly rapid acceleration up to ~7 MeV within 2 days after an initial substantial dropout, as observed by the Van Allen Probes. In the present paper, we evaluate the relative roles of various physical processes during the recovery phase of this large storm using a 3-D diffusion simulation. By quantitatively comparing the observed and simulated electron evolution, we found that chorus plays a critical role in accelerating electrons up to several MeV near the developing peak location and produces characteristic flat-top pitch angle distributions. By only including radial diffusion, the simulation underestimates the observed electron acceleration, while radial diffusion plays an important role in redistributing electrons and potentially accelerates them to even higher energies. Moreover, plasmaspheric hiss is found to provide efficient pitch angle scattering losses for hundreds of keV electrons, while its scattering effect on > 1 MeV electrons is relatively slow. Although an additional loss process is required to fully explain the overestimated electron fluxes at multi-MeV energies, the combined physical processes of radial diffusion and pitch angle and energy diffusion by chorus and hiss reproduce the observed electron dynamics remarkably well, suggesting that quasi-linear diffusion theory is a reasonable framework for evaluating radiation belt electron dynamics during this large storm.
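
    To indicate what the radial diffusion ingredient of such a simulation looks like numerically, the sketch below advances a 1-D radial diffusion equation, ∂f/∂t = L² ∂/∂L (D_LL/L² ∂f/∂L), with an explicit finite-difference scheme. The D_LL profile and initial condition are hypothetical, and the full study above also includes pitch angle and energy diffusion by chorus and hiss.

      import numpy as np

      # Explicit finite-difference sketch of 1-D radial diffusion (hypothetical D_LL).
      nL, days = 101, 2.0
      L = np.linspace(2.0, 6.5, nL)
      dL = L[1] - L[0]
      D_LL = 1.0e-3 * (L / 4.0) ** 10            # hypothetical D_LL in 1/day
      f = np.exp(-((L - 4.0) / 0.5) ** 2)        # initial phase-space density profile

      dt = 0.4 * dL**2 / D_LL.max()              # explicit stability limit
      nsteps = int(days / dt)
      Lh = 0.5 * (L[1:] + L[:-1])                # half-grid points
      Dh = 0.5 * (D_LL[1:] + D_LL[:-1])

      for _ in range(nsteps):
          flux = Dh / Lh**2 * np.diff(f) / dL    # D_LL / L^2 * df/dL at half points
          df = np.zeros_like(f)
          df[1:-1] = L[1:-1] ** 2 * np.diff(flux) / dL
          f += dt * df                           # boundary values held fixed
      print("peak of f after", days, "days is at L =", L[np.argmax(f)])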

  13. Simulation of Cosmic Ray Acceleration, Propagation And Interaction in SNR Environment

    SciTech Connect

    Lee, S.H.; Kamae, T.; Ellison, D.C.; /North Carolina State U.

    2007-10-15

    Recent studies of young supernova remnants (SNRs) with Chandra, XMM, Suzaku and HESS have revealed complex morphologies and spectral features of the emission sites. The critical question of the relative importance of the two competing gamma-ray emission mechanisms in SNRs (inverse-Compton scattering by high-energy electrons and pion production by energetic protons) may be resolved by GLAST-LAT. To keep pace with the improved observations, we are developing a 3D model of particle acceleration, diffusion, and interaction in a SNR where broad-band emission from radio to multi-TeV energies, produced by shock accelerated electrons and ions, can be simulated for a given topology of shock fronts, magnetic field, and ISM densities. The 3D model takes as input the particle spectra predicted by a hydrodynamic simulation of SNR evolution where nonlinear diffusive shock acceleration is coupled to the remnant dynamics. We will present preliminary models of the Galactic Ridge SNR RX J1713-3946 for selected choices of SNR parameters, magnetic field topology, and ISM density distributions. When constrained by broad-band observations, our models should predict the extent of coupling between spectral shape and morphology and provide direct information on the acceleration efficiency of cosmic-ray electrons and ions in SNRs.

  14. Extremely high paw accelerations during paw shake in the cat: A mechanism revealed by computer simulations

    NASA Astrophysics Data System (ADS)

    Klishko, Alexander; Cofer, David; Edwards, Donald; Prilutsky, Boris

    2008-03-01

    Paw shake response is a reflex aimed at removing an irritating stimulus from the paw by imparting to it high periodic accelerations (>10 g). These values seem too high to be produced by distal muscles exclusively. According to Prilutsky et al. (2005), resultant hip moments during paw shake are much greater than distal joint moments, whereas distal joint velocities and accelerations exceed those of the proximal joints. The goal of this study was to examine how proximal hip muscles could contribute to high paw accelerations. Using software AnimatLab, we developed a 2D model of the cat hindlimb consisting of 5 rigid segments with 4 hinge joints and 11 muscles spanning all joints. The muscles were assumed passive except for those crossing the hip. When in simulations the hip muscles were reciprocally activated to periodically flex and extend the hip joint with a typical paw shake frequency of 10 Hz, the hindlimb segments demonstrated motion resembling experimental observations: linear and angular velocities and accelerations of the distal segments exceeded several fold the values of the proximal segments. Simulated paw shake revealed features of a whip-like motion.

  15. Magnetic-island Contraction and Particle Acceleration in Simulated Eruptive Solar Flares

    NASA Astrophysics Data System (ADS)

    Guidoni, S. E.; DeVore, C. R.; Karpen, J. T.; Lynch, B. J.

    2016-03-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy impulsive emission in solar flares is not well understood. Drake et al. proposed a mechanism for accelerating electrons in contracting magnetic islands formed by kinetic reconnection in multi-layered current sheets (CSs). We apply these ideas to sunward-moving flux ropes (2.5D magnetic islands) formed during fast reconnection in a simulated eruptive flare. A simple analytic model is used to calculate the energy gain of particles orbiting the field lines of the contracting magnetic islands in our ultrahigh-resolution 2.5D numerical simulation. We find that the estimated energy gains in a single island range up to a factor of five. This is higher than that found by Drake et al. for islands in the terrestrial magnetosphere and at the heliopause, due to strong plasma compression that occurs at the flare CS. In order to increase their energy by two orders of magnitude and plausibly account for the observed high-energy flare emission, the electrons must visit multiple contracting islands. This mechanism should produce sporadic emission because island formation is intermittent. Moreover, a large number of particles could be accelerated in each magnetohydrodynamic-scale island, which may explain the inferred rates of energetic-electron production in flares. We conclude that island contraction in the flare CS is a promising candidate for electron acceleration in solar eruptions.

  16. Simulation of Cosmic Ray Acceleration, Propagation and Interaction in SNR Environment

    NASA Astrophysics Data System (ADS)

    Lee, S. H.; Kamae, T.; Ellison, D. C.

    2007-07-01

    Recent studies of young supernova remnants (SNRs) with Chandra, XMM, Suzaku and HESS have revealed complex morphologies and spectral features of the emission sites. The critical question of the relative importance of the two competing gamma-ray emission mechanisms in SNRs (inverse-Compton scattering by high-energy electrons and pion production by energetic protons) may be resolved by GLAST-LAT. To keep pace with the improved observations, we are developing a 3D model of particle acceleration, diffusion, and interaction in a SNR where broad-band emission from radio to multi-TeV energies, produced by shock accelerated electrons and ions, can be simulated for a given topology of shock fronts, magnetic field, and ISM densities. The 3D model takes as input the particle spectra predicted by a hydrodynamic simulation of SNR evolution where nonlinear diffusive shock acceleration is coupled to the remnant dynamics (e.g., Ellison, Decourchelle & Ballet; Ellison & Cassam-Chenai; Ellison, Berezhko & Baring). We will present preliminary models of the Galactic Ridge SNR RX J1713-3946 for selected choices of SNR parameters, magnetic field topology, and ISM density distributions. When constrained by broad-band observations, our models should predict the extent of coupling between spectral shape and morphology and provide direct information on the acceleration efficiency of cosmic-ray electrons and ions in SNRs.

  17. Particle acceleration and non-thermal emission in Pulsar Wind Nebulae from relativistic MHD simulations

    NASA Astrophysics Data System (ADS)

    Olmi, B.; Del Zanna, L.; Amato, E.; Bucciantini, N.; Bandiera, R.

    2015-09-01

    Pulsar wind nebulae are among the most powerful particle accelerators in the Galaxy with acceleration efficiencies that reach up to 30% and maximum particle energies in the PeV range. In recent years relativistic axisymmetric MHD models have proven to be excellent tools for describing the physics of such objects, and particularly successful at explaining their high energy morphology, down to very fine details. Nevertheless, some important aspects of the physics of PWNe are still obscure: the mechanism(s) responsible for the acceleration of particles of all energies is (are) still unclear, and the origin of the lowest energy (radio emitting) particles is the most mysterious. The correct interpretation of the origin of radio emitting particles is of fundamental importance, as this holds information about the amount of pair production in the pulsar magnetosphere, and hence on the role of pulsars as antimatter factories. On the other hand, the long lifetimes of these particles against synchrotron losses allow them to travel far from their injection location, making their acceleration site difficult to constrain. As far as the highest energy (X- and gamma-ray emitting) particles are concerned, their acceleration is commonly believed to occur at the pulsar wind termination shock. But since the upstream flow is thought to have non-uniform properties along the shock surface, important constraints on the acceleration mechanism(s) could come from exact knowledge of the location and flow properties where particles are being accelerated. We investigate in detail both topics by means of 2D numerical MHD simulations. Different assumptions on the origin of radio particles and more generally on the injection sites of all particles are considered, and the corresponding emission properties are computed. We discuss the physical constraints that can be inferred from comparison of the synthetic emission properties against multiwavelength observations of the PWN class prototype, the Crab Nebula.

  18. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate, high-performance PIC code for high energy particle acceleration.
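    The paper's specific load-management algorithm is not detailed in this record, so the following Python sketch illustrates only a generic remedy for PIC load imbalance: shifting the boundaries of a 1D domain decomposition so that each rank holds roughly the same number of macro-particles. All function and variable names here are illustrative assumptions, not part of the cited work.

```python
# Generic illustration of particle-count-based domain rebalancing in a 1D
# decomposition; this is not the algorithm proposed in the paper, just a
# common strategy for mitigating PIC load imbalance.
import numpy as np

def rebalance_boundaries(particle_x, n_ranks, x_min, x_max, n_bins=1024):
    """Return domain boundaries such that each rank owns ~equal particle counts."""
    hist, edges = np.histogram(particle_x, bins=n_bins, range=(x_min, x_max))
    cumulative = np.cumsum(hist) / hist.sum()          # fraction of load up to each bin edge
    targets = np.arange(1, n_ranks) / n_ranks          # desired load fractions at internal cuts
    cuts = np.interp(targets, cumulative, edges[1:])   # bin edges where targets are reached
    return np.concatenate(([x_min], cuts, [x_max]))

# Example: a wakefield-like dense bunch concentrated near the right edge of the box.
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0.0, 100.0, 10_000),    # tenuous background plasma
                    rng.normal(90.0, 2.0, 90_000)])      # dense accelerated bunch
print(rebalance_boundaries(x, n_ranks=4, x_min=0.0, x_max=100.0))
```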

  19. Start-to-End Simulations of the LCLS Accelerator and FEL Performance at Very Low Charge

    SciTech Connect

    Ding, Y; Brachmann, A.; Decker, F.-J.; Dowell, D.; Emma, P.; Frisch, J.; Gilevich, S.; Hays, G.; Hering, Ph.; Huang, Z.; Iverson, R.; Loos, H.; Miahnahri, A.; Nuhn, H.-D.; Ratner, D.; Turner, J.; Welch, J.; White, W.; Wu, J.; Pellegrini, C.; /UCLA

    2009-05-26

    The Linac Coherent Light Source (LCLS) is an x-ray Free-electron Laser (FEL) being commissioned at the Stanford Linear Accelerator Center (SLAC). Recent beam measurements have shown that, using the LCLS injector-linac-compressors, the beam emittance is very small at 20 pC. In this paper we perform start-to-end simulations of the entire accelerator including the FEL undulator and study the FEL performance versus the bunch charge. At 20 pC charge, these calculations, combined with the measured beam parameters, suggest the possibility of generating a longitudinally coherent single x-ray spike with 2-femtosecond (fs) duration at a wavelength of 1.5 nm. At the 100 pC charge level, our simulations show an x-ray pulse with 10 fs duration and up to 10^12 photons at a wavelength of 1.5 Å. These results open exciting possibilities for ultrafast science and single-shot molecular imaging.

  20. Simulation of proton acceleration upon irradiation of a mylar target by femtosecond laser pulses

    SciTech Connect

    Andreev, Stepan N; Rukhadze, Anri A; Tarakanov, V P; Yakutov, B P

    2010-01-31

    Acceleration of protons is simulated by the particle-in-cell (PIC) method upon irradiation of mylar targets of different thicknesses by femtosecond plane-polarised pulsed laser radiation and at different angles of radiation incidence on the target. Comparison of the calculated results with data obtained in recent experiments shows good agreement. The optimal angle of incidence (45°), at which the proton energy achieves its absolute maximum, is obtained. (effects of laser radiation on matter)

  1. Open-source graphics processing unit-accelerated ray tracer for optical simulation

    NASA Astrophysics Data System (ADS)

    Mauch, Florian; Gronle, Marc; Lyda, Wolfram; Osten, Wolfgang

    2013-05-01

    Ray tracing is still the workhorse in optical design and simulation. Its basic principle, propagating light as a set of mutually independent rays, implies a linear dependence of the computational effort on the number of rays involved in the problem. At the same time, the mutual independence of the light rays bears a huge potential for parallelization of the computational load. This potential has recently been recognized in the visualization community, where graphics processing unit (GPU)-accelerated ray tracing is used to render photorealistic images. However, precision requirements in optical simulation are substantially higher than in visualization, and therefore performance results known from visualization cannot be expected to transfer to optical simulation one-to-one. In this contribution, we present an open-source implementation of a GPU-accelerated ray tracer, based on NVIDIA's acceleration engine OptiX, that traces in double precision and exploits the massively parallel architecture of modern graphics cards. We compare its performance to a CPU-based tracer that has been developed in parallel.
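    As an illustration of why ray tracing parallelizes so well, the following Python sketch intersects a batch of mutually independent rays with a sphere in double precision using vectorized NumPy. It is a minimal stand-in, not the OptiX-based tracer described in the record.

```python
# Minimal sketch (not the paper's OptiX implementation): mutually independent
# rays make tracing embarrassingly parallel, illustrated here by intersecting
# a batch of rays with a sphere in double precision using vectorized NumPy.
import numpy as np

def ray_sphere_hits(origins, directions, center, radius):
    """Return per-ray hit distance (np.inf where the ray misses the sphere)."""
    oc = origins - center                              # (N, 3), float64 throughout
    b = np.einsum("ij,ij->i", directions, oc)
    c = np.einsum("ij,ij->i", oc, oc) - radius**2
    disc = b * b - c                                   # directions assumed unit length
    t = np.full(origins.shape[0], np.inf)
    hit = disc >= 0.0
    t[hit] = -b[hit] - np.sqrt(disc[hit])              # nearest intersection
    t[t < 0.0] = np.inf                                # ignore intersections behind the origin
    return t

rays_o = np.zeros((5, 3))
rays_d = np.tile([0.0, 0.0, 1.0], (5, 1))
print(ray_sphere_hits(rays_o, rays_d, center=np.array([0.0, 0.0, 5.0]), radius=1.0))
```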

  2. 3D electromagnetic simulation of spatial autoresonance acceleration of electron beams

    NASA Astrophysics Data System (ADS)

    Dugar-Zhabon, V. D.; González, J. D.; Orozco, E. A.

    2016-02-01

    The results of full electromagnetic simulations of electron beam acceleration by a TE112 linearly polarized electromagnetic field through the Space Autoresonance Acceleration mechanism are presented. In the simulations, both the self-sustained electric field and the self-sustained magnetic field produced by the beam electrons are included in the elaborated 3D Particle-in-Cell code. In this system, the space profile of the magnetostatic field maintains the electron beams in the acceleration regime along their trajectories. The beam current density evolution is calculated applying the charge conservation method. The full magnetic field at the superparticle positions is found by employing trilinear interpolation of the mesh node data. The relativistic Newton-Lorentz equation, presented in centered finite difference form, is solved using the Boris algorithm, which provides visualization of the beam electrons' pathways and energy evolution. A comparison between the data obtained from the full electromagnetic simulations and the results derived from the equation of motion in an electrostatic approximation is carried out. It is found that the self-sustained magnetic field is a factor which improves the resonance phase conditions and reduces the beam energy spread.
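    The Boris algorithm named above is standard; as a point of reference, the following Python sketch advances a single relativistic particle in prescribed uniform fields with normalized units (q = m = c = 1). It is an illustration of the algorithm only, not the authors' 3D PIC code.

```python
# Minimal relativistic Boris push for a single particle in given E and B fields
# (normalized units with q = m = c = 1); an illustration of the algorithm named
# in the abstract, not the authors' 3D PIC implementation.
import numpy as np

def boris_push(u, E, B, dt):
    """Advance the normalized momentum u = gamma*v by one time step dt."""
    u_minus = u + 0.5 * dt * E                 # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus))
    t = 0.5 * dt * B / gamma                   # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)   # magnetic rotation, part 1
    u_plus = u_minus + np.cross(u_prime, s)    # magnetic rotation, part 2
    return u_plus + 0.5 * dt * E               # second half electric kick

u = np.array([0.1, 0.0, 0.0])                  # initial gamma*v
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(100):
    u = boris_push(u, E, B, dt=0.05)
print("|u| conserved in a pure magnetic field:", np.linalg.norm(u))
```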

  3. Hybrid Envelope Model/Boosted-Frame Simulations of Laser Wakefield Accelerators

    NASA Astrophysics Data System (ADS)

    Higuera, Adam; Weichman, Kathleen; Abell, Dan; Cowan, Ben; Downer, Michael; Cary, John

    2015-11-01

    Laser wakefield accelerators use a high-intensity laser pulse to drive a wave in a plasma that traps, transports, and accelerates electrons. The Texas Petawatt Laser experiment measures electron energies (2 GeV) lower than those predicted (7 GeV) by computer simulations. We present and analyze a method for efficiently performing higher-fidelity 3-D, particle-in-cell simulations of laser wakefield acceleration. This method combines previous work on a Laser Envelope Model, which resolves electron self-injection, and boosted-frame simulation, which efficiently models beam propagation in the regime where the Envelope Model is no longer valid. This work is supported by the DOE under Grants No. DE-SC0011617 and DE-SC0012444, by DOE/NSF Grant No. DE-SC0012584, and used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  4. GPU Accelerated Implementation of Density Functional Theory for Hybrid QM/MM Simulations.

    PubMed

    Nitsche, Matías A; Ferreria, Manuel; Mocskos, Esteban E; González Lebrero, Mariano C

    2014-03-11

    Hybrid simulation tools (QM/MM) have evolved into a fundamental methodology for studying chemical reactivity in complex environments. This paper presents an implementation of electronic structure calculations based on density functional theory. The development is optimized for performing hybrid molecular dynamics simulations by making use of graphics processors (GPUs) for the most computationally demanding parts (exchange-correlation terms). The proposed implementation is able to take advantage of modern GPUs, achieving accelerations in the relevant portions of 20 to 30 times over the CPU version. The presented code was extensively tested, both in terms of numerical quality and performance, over systems of different size and composition. PMID:26580175

  5. Initial simulation of MHD instabilities in a high-speed plasma accelerator

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Soo; Hughes, Tom; Thio, Francis

    2005-10-01

    High density, high Mach number plasma jets are under development for a variety of critical fusion applications. These applications include fueling, rotation driving, and disruption mitigation in magnetic fusion devices. They also include a range of innovative approaches to high energy density plasmas. FAR-TECH, Inc. has begun 3D MHD simulations using the LSP code [1] to examine such high speed plasma jets. An initial study to benchmark the code is currently underway. The blow-by instability will be simulated in a coaxial plasma accelerator using the 3D LSP code and compared with the 2D MACH2 code results. [1] LSP-Manual-MRC-ABQ-R-1942.pdf

  6. GPU-Accelerated PIC/MCC Simulation of Laser-Plasma Interaction Using BUMBLEBEE

    NASA Astrophysics Data System (ADS)

    Jin, Xiaolin; Huang, Tao; Chen, Wenlong; Wu, Huidong; Tang, Maowen; Li, Bin

    2015-11-01

    Research on laser-plasma interaction and its wide range of applications relies on advanced numerical simulation tools that achieve high performance while reducing computational time and cost. BUMBLEBEE has been developed as a fast simulation tool for research on laser-plasma interactions. BUMBLEBEE uses a 1D3V electromagnetic PIC/MCC algorithm that is accelerated by high-performance Graphics Processing Unit (GPU) hardware. BUMBLEBEE includes a friendly user-interface module and four physics simulators. The user interface provides a powerful solid-modeling front end and graphical and computational post-processing functionality. The solver of BUMBLEBEE currently has four modules, which are used to simulate field ionization, electron collisional ionization, binary Coulomb collisions, and laser-plasma interaction processes. The ionization characteristics of laser-neutral interaction and the generation of high-energy electrons have been analyzed with BUMBLEBEE for validation.

  7. A method for the accelerated simulation of micro-embossed topographies in thermoplastic polymers

    NASA Astrophysics Data System (ADS)

    Taylor, Hayden; Hale, Melinda; Cheong Lam, Yee; Boning, Duane

    2010-06-01

    Users of hot micro-embossing often wish to simulate numerically the topographies produced by the process. We have previously demonstrated a fast simulation technique that encapsulates the embossed layer's viscoelastic properties using the response of its surface topography to a mechanical impulse applied at a single location. The simulated topography is the convolution of this impulse response with an iteratively found stamp-polymer contact-pressure distribution. Here, we show how the simulation speed can be radically increased by abstracting feature-rich embossing-stamp designs. The stamp is divided into a grid of regions, each characterized by feature shape, pitch and areal density. The simulation finds a contact-pressure distribution at the resolution of the grid, from which the completeness of pattern replication is predicted. For a 25 mm square device design containing microfluidic features down to 5 µm diameter, simulation can be completed within 10 s, as opposed to the 10^4 s expected if each stamp feature were represented individually. We verify the accuracy of our simulation procedure by comparison with embossing experiments. We also describe a way of abstracting designs at multiple levels of spatial resolution, further accelerating the simulation of patterns whose detail is contained in a small proportion of their area.
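    The core of the method described above is a convolution of a point-load impulse response with the contact-pressure distribution. The Python sketch below shows that convolution step with placeholder arrays standing in for the measured impulse response and the iteratively found pressure map; it is not the authors' simulation code.

```python
# Minimal sketch of the convolution step described above: the simulated
# topography is the impulse response convolved with the contact-pressure map.
# The arrays here are placeholders, not data from the paper.
import numpy as np
from scipy.signal import fftconvolve

n = 256                                            # grid points per side
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

impulse_response = np.exp(-(X**2 + Y**2) / 0.01)   # stand-in point-load deflection kernel
pressure = np.zeros((n, n))
pressure[96:160, 96:160] = 1.0                     # stand-in stamp-polymer contact patch

topography = fftconvolve(pressure, impulse_response, mode="same")
print(topography.shape, topography.max())
```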

  8. Simulations of ion acceleration at non-relativistic shocks. II. Magnetic field amplification

    SciTech Connect

    Caprioli, D.; Spitkovsky, A.

    2014-10-10

    We use large hybrid simulations to study ion acceleration and generation of magnetic turbulence due to the streaming of particles that are self-consistently accelerated at non-relativistic shocks. When acceleration is efficient, we find that the upstream magnetic field is significantly amplified. The total amplification factor is larger than 10 for shocks with Alfvénic Mach number M = 100, and scales with the square root of M. The spectral energy density of excited magnetic turbulence is determined by the energy distribution of accelerated particles, and for moderately strong shocks (M ≲ 30) agrees well with the prediction of resonant streaming instability, in the framework of quasilinear theory of diffusive shock acceleration. For M ≳ 30, instead, Bell's non-resonant hybrid (NRH) instability is predicted and found to grow faster than resonant instability. NRH modes are excited far upstream by escaping particles, and initially grow without disrupting the current, their typical wavelengths being much shorter than the current ions' gyroradii. Then, in the nonlinear stage, most unstable modes migrate to larger and larger wavelengths, eventually becoming resonant in wavelength with the driving ions, which start to diffuse. Ahead of strong shocks we distinguish two regions, separated by the free-escape boundary: the far upstream, where field amplification is provided by the current of escaping ions via NRH instability, and the shock precursor, where energetic particles are effectively magnetized, and field amplification is provided by the current of diffusing ions. The presented scalings of magnetic field amplification enable the inclusion of self-consistent microphysics into phenomenological models of ion acceleration at non-relativistic shocks.

  9. Issues for Simulation of Galactic Cosmic Ray Exposures for Radiobiological Research at Ground-Based Accelerators.

    PubMed

    Kim, Myung-Hee Y; Rusek, Adam; Cucinotta, Francis A

    2015-01-01

    For radiobiology research on the health risks of galactic cosmic rays (GCR), ground-based accelerators have been used with mono-energetic beams of single high charge, Z and energy, E (HZE) particles. In this paper, we consider the pros and cons of a GCR reference field at a particle accelerator. At the NASA Space Radiation Laboratory (NSRL), we have proposed a GCR simulator, which implements a new rapid switching mode and higher energy beam extraction to 1.5 GeV/u, in order to integrate multiple ions into a single simulation within hours or longer for chronic exposures. After considering the GCR environment and energy limitations of NSRL, we performed extensive simulation studies using the stochastic transport code GERMcode (GCR Event Risk Model) to define a GCR reference field using 9 HZE particle beam-energy combinations, each with a unique absorber thickness to provide fragmentation, and 10 or more energies of proton and 4He beams. The reference field is shown to represent well the charge dependence of GCR dose in several energy bins behind shielding compared to a simulated GCR environment. However, a more significant challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with animal models of human risks. We discuss issues in approaches to map important biological time scales in experimental models using ground-based simulation, with extended exposure of up to a few weeks using chronic or fractionation exposures. A kinetics model of HZE particle hit probabilities suggests that experimental simulations of several weeks will be needed to avoid high fluence rate artifacts, which places limitations on the experiments to be performed. Ultimately, risk estimates are limited by theoretical understanding, and a focus on improving knowledge of mechanisms and developing experimental models to improve this understanding should remain the highest priority for space radiobiology research. PMID:26090339

  10. VMC++ versus BEAMnrc: A comparison of simulated linear accelerator heads for photon beams

    SciTech Connect

    Hasenbalg, F.; Fix, M. K.; Born, E. J.; Mini, R.; Kawrakow, I.

    2008-04-15

    BEAMnrc, a code for simulating medical linear accelerators based on EGSnrc, has been benchmarked and used extensively in the scientific literature and is therefore often considered to be the gold standard for Monte Carlo simulations for radiotherapy applications. However, its long computation times make it too slow for routine clinical use and often even for research purposes without a large investment in computing resources. VMC++ is a much faster code thanks to the intensive use of variance reduction techniques and a much faster implementation of the condensed history technique for charged particle transport. A research version of this code is also capable of simulating the full head of linear accelerators operated in photon mode (excluding multileaf collimators, hard and dynamic wedges). In this work, a validation of the full head simulation at 6 and 18 MV is performed, simulating with VMC++ and BEAMnrc the addition of one head component at a time and comparing the resulting phase space files. For the comparison, photon and electron fluence, photon energy fluence, mean energy, and photon spectra are considered. The largest absolute differences are found in the energy fluences. For all the simulations of the different head components, a very good agreement (differences in energy fluences between VMC++ and BEAMnrc <1%) is obtained. Only a particular case at 6 MV shows a somewhat larger energy fluence difference of 1.4%. Dosimetrically, these phase space differences imply an agreement between both codes at the <1% level, making the VMC++ head module suitable for full head simulations with a considerable gain in efficiency and without loss of accuracy.

  11. Issues for Simulation of Galactic Cosmic Ray Exposures for Radiobiological Research at Ground-Based Accelerators

    PubMed Central

    Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis A.

    2015-01-01

    For radiobiology research on the health risks of galactic cosmic rays (GCR) ground-based accelerators have been used with mono-energetic beams of single high charge, Z and energy, E (HZE) particles. In this paper, we consider the pros and cons of a GCR reference field at a particle accelerator. At the NASA Space Radiation Laboratory (NSRL), we have proposed a GCR simulator, which implements a new rapid switching mode and higher energy beam extraction to 1.5 GeV/u, in order to integrate multiple ions into a single simulation within hours or longer for chronic exposures. After considering the GCR environment and energy limitations of NSRL, we performed extensive simulation studies using the stochastic transport code, GERMcode (GCR Event Risk Model) to define a GCR reference field using 9 HZE particle beam–energy combinations each with a unique absorber thickness to provide fragmentation and 10 or more energies of proton and 4He beams. The reference field is shown to well represent the charge dependence of GCR dose in several energy bins behind shielding compared to a simulated GCR environment. However, a more significant challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with animal models of human risks. We discuss issues in approaches to map important biological time scales in experimental models using ground-based simulation, with extended exposure of up to a few weeks using chronic or fractionation exposures. A kinetics model of HZE particle hit probabilities suggests that experimental simulations of several weeks will be needed to avoid high fluence rate artifacts, which places limitations on the experiments to be performed. Ultimately risk estimates are limited by theoretical understanding, and focus on improving knowledge of mechanisms and development of experimental models to improve this understanding should remain the highest priority for space radiobiology research. PMID:26090339

  12. Simulating the effects of timing and energy stability in a laser wakefield accelerator with external injection

    SciTech Connect

    Dijk, W. van; Corstens, J. M.; Stragier, X. F. D.; Brussaard, G. J. H.; Geer, S. B. van der

    2009-01-22

    One of the most compelling reasons to use external injection of electrons into a laser wakefield accelerator is to improve the stability and reproducibility of the accelerated electrons. We have built a simulation tool based on particle tracking to investigate the expected output parameters. Specifically, we are simulating the variations in energy and bunch charge under the influence of variations in laser power and timing jitter. In these simulations an a_0 = 0.32 to a_0 = 1.02 laser pulse with 10% shot-to-shot energy fluctuation is focused into a plasma waveguide with a density of 1.0×10^24 m^-3 and a calculated matched spot size of 50.2 μm. The timing of the injected electron bunch with respect to the laser pulse is varied by up to 1 ps from the standard timing (1 ps ahead of or behind the laser pulse, depending on the regime). The simulation method and first results will be presented. Shortcomings and possible extensions to the model will be discussed.

  13. Monte Carlo Simulation of the Irradiation of Alanine Coated Film Dosimeters with Accelerated Electrons

    NASA Astrophysics Data System (ADS)

    Uribe, R. M.; Salvat, F.; Cleland, M. R.; Berejka, A.

    2009-03-01

    The Monte Carlo code PENELOPE was used to simulate the irradiation of alanine coated film dosimeters with electron beams of energies from 1 to 5 MeV being produced by a high-current industrial electron accelerator. This code includes a geometry package that defines complex quadratic geometries, such as those of the irradiation of products in an irradiation processing facility. In the present case the energy deposited on a water film at the surface of a wood parallelepiped was calculated using the program PENMAIN, which is a generic main program included in the PENELOPE distribution package. The results from the simulation were then compared with measurements performed by irradiating alanine film dosimeters with electrons using a 150 kW Dynamitron™ electron accelerator. The alanine films were placed on top of a set of wooden planks using the same geometrical arrangement as the one used for the simulation. The way the results from the simulation can be correlated with the actual measurements, taking into account the irradiation parameters, is described. An estimation of the percentage difference between measurements and calculations is also presented.

  14. Synergy Between Experiments and Simulations in Laser and Beam-Driven Plasma Acceleration and Light Sources

    NASA Astrophysics Data System (ADS)

    Mori, Warren B.

    2015-11-01

    Computer simulations have been an integral part of plasma physics research since the early 1960s. Initially, they provided the ability to confirm and test linear and nonlinear theories in one dimension. As simulation capabilities and computational power improved, simulations were also used to test new ideas and applications of plasmas in multiple dimensions. As progress continued, simulations were also used to model experiments. Today computer simulations of plasmas are ubiquitously used to test new theories, understand complicated nonlinear phenomena, model the full temporal and spatial scale of experiments, simulate parameters beyond the reach of current experiments, and test the performance of new devices before large capital expenditures are made to build them. In this talk I review the progress in simulations in a particular area of plasma physics: plasma-based acceleration (PBA). In PBA a short laser pulse or particle beam propagates through long regions of plasma, creating plasma wave wakefields on which electrons or positrons surf to high energies. In some cases the wakefields are highly nonlinear, involve three-dimensional effects, and the trajectories of plasma particles cross, making it essential that fully kinetic and three-dimensional models are used. I will show how particle-in-cell (PIC) simulations were initially used to propose the basic idea of PBA in one dimension. I will review some of the dramatic progress in the experimental demonstration of PBA and show how this progress was dramatically helped by a synergy between experiments and full-scale multi-dimensional PIC simulations. This will include a review of how the capability of PIC simulation tools has improved. I will also touch on some recent progress on improvements to PIC simulations of PBA and discuss how these improvements may push the synergy further towards real-time steering of experiments and start-to-end modeling of key components of a future linear collider or XFEL based on PBA.

  15. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2007-09-01

    The ant colony method is used to control the application of variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the "hot" regions of the accelerator, information that is essential for developing a source model for this therapy tool.
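    Splitting and Russian roulette, the two techniques being controlled, act on particle statistical weights; the Python sketch below illustrates the two operations in their simplest weight-conserving form. The ant colony controller itself is not reproduced here.

```python
# Minimal sketch of the two variance reduction operations named above,
# splitting and Russian roulette, acting on weighted particles. The ant colony
# controller that tunes them in the paper is not reproduced here.
import random

def split(particle, n_split):
    """Replace one particle by n_split copies carrying 1/n_split of the weight."""
    w = particle["weight"] / n_split
    return [dict(particle, weight=w) for _ in range(n_split)]

def russian_roulette(particle, survival_prob):
    """Kill the particle with probability 1 - survival_prob; survivors gain weight."""
    if random.random() < survival_prob:
        return dict(particle, weight=particle["weight"] / survival_prob)
    return None                                   # particle terminated

random.seed(1)
p = {"energy_MeV": 6.0, "weight": 1.0}
print(split(p, 4))                                # used in important ("hot") regions
print(russian_roulette(p, survival_prob=0.25))    # used in unimportant regions
```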

  16. Measurement of performance using acceleration control and pulse control in simulated spacecraft docking operations

    NASA Technical Reports Server (NTRS)

    Brody, Adam R.; Ellis, Stephen R.

    1992-01-01

    Nine commercial airline pilots served as test subjects in a study to compare acceleration control with pulse control in simulated spacecraft maneuvers. Simulated remote dockings of an orbital maneuvering vehicle (OMV) to a space station were initiated from 50, 100, and 150 meters along the station's -V-bar (minus velocity vector). All unsuccessful missions were reflown. A five-way mixed analysis of variance (ANOVA) with one between factor (first mode) and four within factors (mode, block, range, and trial) was performed on the data. Recorded performance measures included mission duration and fuel consumption along each of the three coordinate axes. Mission duration was lower with pulse mode, while delta V (fuel consumption) was lower with acceleration mode. Subjects used more fuel to travel faster with pulse mode than with acceleration mode. Mission duration, delta V, X delta V, Y delta V, and Z delta V all increased with range. Subjects commanded the OMV to 'fly' at faster rates from further distances. These higher average velocities were paid for with increased fuel consumption. Asymmetrical transfer was found in that the mode transitions could not be predicted solely from the mission duration main effect. More testing is advised to better understand the manual control aspects of spaceflight maneuvers.

  17. Accelerated multiscale space-time finite element simulation and application to high cycle fatigue life prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Wen, Lihua; Naboulsi, Sam; Eason, Thomas; Vasudevan, Vijay K.; Qian, Dong

    2016-05-01

    A multiscale space-time finite element method based on a time-discontinuous Galerkin (TDG) and enrichment approach is presented in this work, with a focus on improving the computational efficiency of high cycle fatigue simulations. While the robustness of the TDG-based space-time method has been extensively demonstrated, a critical barrier to wider application is the large computational cost due to the additional temporal dimension and enrichment that are introduced. The present implementation focuses on two aspects: first, a preconditioned iterative solver is developed along with techniques for optimizing the matrix storage and operations; second, parallel algorithms based on multi-core graphics processing units are established to accelerate the progressive damage model implementation. It is shown that the computing time and memory of the accelerated space-time implementation scale with the number of degrees of freedom N as ~O(N^1.6) and ~O(N), respectively. Finally, we demonstrate the accelerated space-time FEM simulation through benchmark problems.

  18. Accelerated multiscale space-time finite element simulation and application to high cycle fatigue life prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Wen, Lihua; Naboulsi, Sam; Eason, Thomas; Vasudevan, Vijay K.; Qian, Dong

    2016-08-01

    A multiscale space-time finite element method based on a time-discontinuous Galerkin (TDG) and enrichment approach is presented in this work, with a focus on improving the computational efficiency of high cycle fatigue simulations. While the robustness of the TDG-based space-time method has been extensively demonstrated, a critical barrier to wider application is the large computational cost due to the additional temporal dimension and enrichment that are introduced. The present implementation focuses on two aspects: first, a preconditioned iterative solver is developed along with techniques for optimizing the matrix storage and operations; second, parallel algorithms based on multi-core graphics processing units are established to accelerate the progressive damage model implementation. It is shown that the computing time and memory of the accelerated space-time implementation scale with the number of degrees of freedom N as ~O(N^1.6) and ~O(N), respectively. Finally, we demonstrate the accelerated space-time FEM simulation through benchmark problems.

  19. Acceleration of hybrid MPI parallel NBODY6++ for large N-body globular cluster simulations

    NASA Astrophysics Data System (ADS)

    Wang, Long; Spurzem, Rainer; Aarseth, Sverre; Nitadori, Keigo; Berczik, Peter; Kouwenhoven, M. B. N.; Naab, Thorsten

    2016-02-01

    Previous research on globular cluster (GC) dynamics is mostly based on semi-analytic, Fokker-Planck, and Monte-Carlo methods and on direct N-body (NB) simulations. These approaches have great advantages but also limitations, since GCs are massive and compact and close encounters and binaries play very important roles in their dynamics. The former three methods make approximations and assumptions, while expensive computing time and the number of stars limit the latter. The current largest direct NB simulation has ~500k stars (Heggie 2014). Here, we accelerate the direct NB code NBODY6++ (which extends NBODY6 to supercomputers by using MPI) with new parallel computing technologies (GPU, OpenMP + SSE/AVX). Our aim is to handle large-N (up to 10^6) direct NB simulations to obtain a better understanding of the dynamical evolution of GCs.
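    The kernel that such codes offload to GPUs is the O(N^2) pairwise force evaluation; the following Python sketch shows a vectorized version with Plummer softening in arbitrary units (G = 1). It is a generic illustration, not the NBODY6++ kernels.

```python
# Vectorized O(N^2) pairwise gravitational accelerations with Plummer softening,
# illustrating the kernel that direct N-body codes offload to GPUs. This is a
# sketch in arbitrary units (G = 1), not the NBODY6++ implementation.
import numpy as np

def accelerations(pos, mass, softening=1e-3):
    """Return the (N, 3) array of accelerations for particles at pos with masses mass."""
    dx = pos[None, :, :] - pos[:, None, :]                   # displacement r_j - r_i
    r2 = np.einsum("ijk,ijk->ij", dx, dx) + softening**2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                            # no self-force
    return np.einsum("ij,ijk->ik", mass[None, :] * inv_r3, dx)

rng = np.random.default_rng(42)
pos = rng.normal(size=(1000, 3))
mass = np.full(1000, 1.0 / 1000)
print(accelerations(pos, mass).shape)
```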

  20. Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids.

    SciTech Connect

    Deng, Jie; Zimmerman, Jonathan A.; Thompson, Aidan Patrick; Brown, William Michael; Plimpton, Steven James; Zhou, Xiao Wang; Wagner, Gregory John; Erickson, Lindsay Crowl

    2011-09-01

    Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.

  1. 3-D Simulations of Plasma Wakefield Acceleration with Non-Idealized Plasmas and Beams

    SciTech Connect

    Deng, S.; Katsouleas, T.; Lee, S.; Muggli, P.; Mori, W.B.; Hemker, R.; Ren, C.; Huang, C.; Dodd, E.; Blue, B.E.; Clayton, C.E.; Joshi, C.; Wang, S.; Decker, F.J.; Hogan, M.J.; Iverson, R.H.; O'Connell, C.; Raimondi, P.; Walz, D.; /SLAC

    2005-09-27

    3-D Particle-in-cell OSIRIS simulations of the current E-162 Plasma Wakefield Accelerator Experiment are presented in which a number of non-ideal conditions are modeled simultaneously. These include tilts on the beam in both planes, asymmetric beam emittance, beam energy spread and plasma inhomogeneities both longitudinally and transverse to the beam axis. The relative importance of the non-ideal conditions is discussed and a worst case estimate of the effect of these on energy gain is obtained. The simulation output is then propagated through the downstream optics, drift spaces and apertures leading to the experimental diagnostics to provide insight into the differences between actual beam conditions and what is measured. The work represents a milestone in the level of detail of simulation comparisons to plasma experiments.

  2. Numerical simulations of input and output couplers for linear accelerator structures

    SciTech Connect

    Ng, C.K.; Ko, K.

    1993-04-01

    We present the numerical procedures involved in the design of coupler cavities for accelerator sections for linear colliders. The MAFIA code is used to simulate an X-band accelerator section with a symmetrical double-input coupler at each end. The transmission properties of the structure are calculated in the time domain and the dimensions of the coupler cavities are adjusted until the power coupling is optimized and frequency synchronism is obtained. We compare the performance of the symmetrical double-input design with that of the conventional single-input type by evaluating the field amplitude and phase asymmetries. We also evaluate the peak gradient in the coupler and discuss the implication of pulse rise time on dark current generation.

  3. Hybrid-PIC Algorithms for Simulation of Large-Scale Plasma Jet Accelerators

    NASA Astrophysics Data System (ADS)

    Thoma, Carsten; Welch, Dale

    2009-11-01

    Merging coaxial plasma jets are envisioned for use in magneto-inertial fusion schemes as the source of an imploding plasma liner. An experimental program at HyperV is considering the generation of large plasma jets (length scales on the order of centimeters) at high densities (10^16-10^17 cm^-3) in long coaxial accelerators. We describe the Hybrid particle-in-cell (PIC) methods implemented in the code LSP for this parameter regime and present simulation results of the HyperV accelerator. A radiation transport algorithm has also been implemented in LSP so that the effect of radiation cooling on the jet Mach number can be included self-consistently in the Hybrid PIC formalism.

  4. Targeting Atmospheric Simulation Algorithms for Large Distributed Memory GPU Accelerated Computers

    SciTech Connect

    Norman, Matthew R

    2013-01-01

    Computing platforms are increasingly moving to accelerated architectures, and here we deal particularly with GPUs. In [15], a method was developed for atmospheric simulation to improve efficiency on large distributed memory machines by reducing communication demand and increasing the time step. Here, we improve upon this method to further target GPU accelerated platforms by reducing GPU memory accesses, removing a synchronization point, and better clustering computations. The modification ran over two times faster in some cases even though more computations were required, demonstrating the merit of improving memory handling on the GPU. Furthermore, we discover that the modification also has a near 100% hit rate in fast on-chip L1 cache and discuss the reasons for this. In concluding, we remark on further potential improvements to GPU efficiency.

  5. Laboratory simulation of ion acceleration in the presence of lower hybrid waves

    NASA Astrophysics Data System (ADS)

    McWilliams, R.; Koslover, R.; Boehmer, H.; Rynn, N.

    Ion acceleration perpendicular to the geomagnetic field has been observed by satellites and rockets in the suprauroral region. Also found are broadband lower-hybrid waves and, at higher altitudes, conical upward-flowing ion distributions. The UCI Q-machine has been used to simulate the effect of lower hybrid waves on ion acceleration. Laser-induced fluorescence was used for high-resolution, non-perturbing measurements of the ion velocity distribution function. The plasma consisted of a 1 m long, 5 cm diameter barium plasma with densities on the order of 10^10 cm^-3, contained by a 3 kG magnetic field. Substantial changes in the perpendicular ion distribution were found. Main-body ion heating occurred along with non-Maxwellian tail production. Over a 10 dB change in input wave power, we observed up to a factor of 3 enhancement in the main-body ion temperature.

  6. Benchmarking shielding simulations for an accelerator-driven spallation neutron source

    NASA Astrophysics Data System (ADS)

    Cherkashyna, Nataliia; DiJulio, Douglas D.; Panzner, Tobias; Rantsiou, Emmanouela; Filges, Uwe; Ehlers, Georg; Bentley, Phillip M.

    2015-08-01

    The shielding at an accelerator-driven spallation neutron facility plays a critical role in the performance of the neutron scattering instruments, the overall safety, and the total cost of the facility. Accurate simulation of shielding components is thus key for the design of upcoming facilities, such as the European Spallation Source (ESS), currently under construction in Lund, Sweden. In this paper, we present a comparative study between the measured and the simulated neutron background at the Swiss Spallation Neutron Source (SINQ), at the Paul Scherrer Institute (PSI), Villigen, Switzerland. The measurements were carried out at several positions along the SINQ monolith wall with the neutron dosimeter WENDI-2, which has a well-characterized response up to 5 GeV. The simulations were performed using the Monte-Carlo radiation transport code geant4, and include a complete transport from the proton beam to the measurement locations in a single calculation. Agreement between measurements and simulations is within about a factor of 2 for the points where the measured radiation dose is above the background level, which is a satisfactory result for simulations spanning many energy regimes, different physics processes, and transport through several meters of shielding materials. The radiation field emanating from the monolith was confirmed to originate from neutrons with energies above 1 MeV in the target region. The current work validates geant4 as being well suited for deep-shielding calculations at accelerator-based spallation sources. We also extrapolate what the simulated flux levels might imply for short (several tens of meters) instruments at ESS.

  7. Numerical simulations of Hall-effect plasma accelerators on a magnetic-field-aligned mesh.

    PubMed

    Mikellides, Ioannis G; Katz, Ira

    2012-10-01

    The ionized gas in Hall-effect plasma accelerators spans a wide range of spatial and temporal scales, and exhibits diverse physics, some of which remains elusive even after decades of research. Inside the acceleration channel a quasiradial applied magnetic field impedes the current of electrons perpendicular to it in favor of a significant component in the E×B direction. Ions are unmagnetized and, arguably, have long collisional mean free paths. Collisions between the atomic species are rare. This paper reports on a computational approach that numerically solves the 2D axisymmetric vector form of Ohm's law with no assumptions regarding the resistance to classical electron transport in the parallel relative to the perpendicular direction. The numerical challenges related to the large disparity of the transport coefficients in the two directions are met by solving the equations on a computational mesh that is aligned with the applied magnetic field. This approach allows for a large physical domain that extends more than five times the thruster channel length in the axial direction and encompasses the cathode boundary where the lines of force can become nonisothermal. It also allows for the self-consistent solution of the plasma conservation laws near the anode boundary, and for simulations in accelerators with complex magnetic field topologies. Ions are treated as an isothermal, cold (relative to the electrons) fluid, accounting for the ion drag in the momentum equation due to ion-neutral (charge-exchange) and ion-ion collisions. The density of the atomic species is determined using an algorithm that eliminates the statistical noise associated with discrete-particle methods. Numerical simulations are presented that illustrate the impact of the above-mentioned features on our understanding of the plasma in these accelerators. PMID:23214706

  8. Ion acceleration in quasi-perpendicular PIC simulations of a reforming heliospheric termination shock

    NASA Astrophysics Data System (ADS)

    Lee, R. E.; Chapman, S. C.; Dendy, R. O.

    2003-12-01

    Recent Particle-in-cell (PIC) simulations have revealed time-dependent shock solutions for parameters relevant to astrophysical and heliospheric shocks [1,2,3]. These solutions are characterised by a shock which cyclically reforms on the spatio-temporal scales of the incoming protons. Whether a shock solution is stationary or reforming depends not only upon the correct treatment of the electrons, but also on the plasma parameters, the upstream β in particular. In the case of the heliospheric termination shock these parameters are not well determined; however, some estimates suggest that the termination shock may be in a parameter regime such that it is time-dependent. It has been pointed out [3] that this will switch off some acceleration mechanisms, for example shock surfing, which has been proposed previously for time-stationary shock solutions. The introduction of time-dependent electromagnetic fields intrinsic to the shock does however introduce the possibility of new mechanisms for the acceleration of protons. Here we present for the first time one such process as revealed by high phase space resolution 1.5D PIC simulations in which all vector quantities are three dimensional, the solution then varying with the spatial coordinate and time. We find that a subset of the protons that reflect off the reforming shock front are accelerated by subsequent interaction with the shock to form a suprathermal population which then propagates into the downstream region with energies of order six times the upstream inflow energy. These may provide an injection population for further acceleration to cosmic ray energies. [1] Shimada, N., and M. Hoshino, Astrophys. J., 543, L67, 2000. [2] Schmitz, H., S.C. Chapman and R.O. Dendy, Astrophys. J., 570, 637, 2002. [3] Scholer, M., I. Shinohara and S. Matsukiyo, J. Geophys. Res., 108, 1014, 2003.

  9. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feng, Bing

    Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computations. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied on the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement with the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models, the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the upgrade, which increases the bunch population by 5 times. But the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac.

  10. Accelerating Markov chain Monte Carlo simulation through sequential updating and parallel computing

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao

    Monte Carlo simulation is a statistical sampling method used in studies of physical systems with properties that cannot be easily obtained analytically. The phase behavior of the Restricted Primitive Model of electrolyte solutions on the simple cubic lattice is studied using grand canonical Monte Carlo simulations and finite-size scaling techniques. The transition between disordered and ordered, NaCl-like structures is continuous, second-order at high temperatures and discrete, first-order at low temperatures. The line of continuous transitions meets the line of first-order transitions at a tricritical point. A new algorithm, Random Skipping Sequential (RSS) Monte Carlo, is proposed, justified, and shown analytically to have better mobility over the phase space than the conventional Metropolis algorithm, which satisfies strict detailed balance. The new algorithm employs sequential updating and yields greatly enhanced sampling statistics compared with the Metropolis algorithm with random updating. A parallel version of Markov chain theory is introduced and applied in accelerating Monte Carlo simulation via cluster computing. It is shown that sequential updating is the key to reducing the inter-processor communication or synchronization that slows down parallel simulation with an increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show a substantial reduction of simulation time by the new method for systems of large and moderate sizes.
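    To make the distinction between update orders concrete, the Python sketch below performs sequential-updating Metropolis sweeps of a simple 2D lattice gas, visiting every site in order. It is a generic illustration of sequential updating, not the RSS algorithm proposed in the thesis.

```python
# Minimal sequential-updating Metropolis sweep for a 2D lattice gas
# (occupation 0/1, nearest-neighbor attraction). A generic illustration of
# sequential site updates, not the RSS algorithm proposed in the thesis.
import numpy as np

rng = np.random.default_rng(0)
L, beta, eps, mu = 32, 1.0, 1.0, -2.0          # lattice size, 1/kT, coupling, chemical potential
lattice = rng.integers(0, 2, size=(L, L))

def sweep(lat):
    """Visit every site in order and attempt to flip its occupation."""
    for i in range(L):
        for j in range(L):
            nn = lat[(i+1) % L, j] + lat[(i-1) % L, j] + lat[i, (j+1) % L] + lat[i, (j-1) % L]
            new = 1 - lat[i, j]
            dE = (new - lat[i, j]) * (-eps * nn - mu)   # energy change for the proposed flip
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                lat[i, j] = new

for _ in range(10):
    sweep(lattice)
print("mean occupation:", lattice.mean())
```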

  11. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one energy group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with a vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.

  12. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    SciTech Connect

    Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu

    2011-07-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one energy group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with a vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.

  13. Contact detection acceleration in pebble flow simulation for pebble bed reactor systems

    SciTech Connect

    Li, Y.; Ji, W.

    2013-07-01

    Pebble flow simulation plays an important role in the steady state and transient analysis of thermal-hydraulics and neutronics for Pebble Bed Reactors (PBR). The Discrete Element Method (DEM) and the modified Molecular Dynamics (MD) method are widely used to simulate the pebble motion to obtain the distribution of pebble concentration, velocity, and maximum contact stress. Although DEM and MD present high accuracy in the pebble flow simulation, they are quite computationally expensive due to the large quantity of pebbles to be simulated in a typical PBR and the ubiquitous contacts and collisions between neighboring pebbles that need to be detected frequently in the simulation, which greatly restricts their applicability to large-scale PBR designs such as PBMR400. Since contact detection accounts for more than 60% of the overall CPU time in the pebble flow simulation, accelerating it can greatly enhance the overall efficiency. In the present work, based on the design features of PBRs, two contact detection algorithms, the basic cell search algorithm and the bounding box search algorithm, are investigated and applied to pebble contact detection. The influence of the PBR system size, core geometry, and searching cell size on the contact detection efficiency is presented. Our results suggest that for present PBR applications, the bounding box algorithm is less sensitive to the aforementioned effects and has superior performance in pebble contact detection compared with the basic cell search algorithm. (authors)
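    As an illustration of the basic cell search idea compared in this work, the Python sketch below bins equal-radius pebbles into cells one diameter wide and tests only same-cell and neighboring-cell pairs for contact. It is a generic sketch, not the authors' implementation.

```python
# Minimal cell-list ("basic cell search") contact detection for equal-radius
# spheres: only pebbles in the same or neighboring cells are tested, which
# avoids the O(N^2) all-pairs check. A generic sketch, not the authors' code.
import itertools
import numpy as np
from collections import defaultdict

def contacts(centers, diameter):
    """Return index pairs of overlapping spheres using a cell size of one diameter."""
    cells = defaultdict(list)
    for idx, c in enumerate(centers):
        cells[tuple((c // diameter).astype(int))].append(idx)
    pairs = []
    for cell, members in cells.items():
        for offset in itertools.product((-1, 0, 1), repeat=3):
            neighbor = tuple(np.add(cell, offset))
            for i in members:
                for j in cells.get(neighbor, []):
                    if i < j and np.linalg.norm(centers[i] - centers[j]) < diameter:
                        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(3)
pebbles = rng.uniform(0.0, 1.0, size=(2000, 3))    # random pebble centers in a unit box
print(len(contacts(pebbles, diameter=0.06)), "contacting pairs found")
```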

  14. GeNN: a code generation framework for accelerated brain simulations.

    PubMed

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  15. A 3D MPI-Parallel GPU-accelerated framework for simulating ocean wave energy converters

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Raessi, Mehdi

    2015-11-01

    We present an MPI-parallel GPU-accelerated computational framework for studying the interaction between ocean waves and wave energy converters (WECs). The computational framework captures the viscous effects, nonlinear fluid-structure interaction (FSI), and breaking of waves around the structure, which cannot be captured in many potential flow solvers commonly used for WEC simulations. The full Navier-Stokes equations are solved using the two-step projection method, which is accelerated by porting the pressure Poisson equation to GPUs. The FSI is captured using the numerically stable fictitious domain method. A novel three-phase interface reconstruction algorithm is used to resolve three phases in a VOF-PLIC context. A consistent mass and momentum transport approach enables simulations at high density ratios. The accuracy of the overall framework is demonstrated via an array of test cases. Numerical simulations of the interaction between ocean waves and WECs are presented. Funding from the National Science Foundation CBET-1236462 grant is gratefully acknowledged.
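    The GPU-accelerated step mentioned above is the pressure Poisson solve of the two-step projection method. The Python sketch below shows a minimal Jacobi iteration for a 2D Poisson problem of the kind typically offloaded to GPUs; it is a generic CPU illustration, not the authors' solver.

```python
# Minimal Jacobi iteration for a 2D pressure Poisson equation on a uniform
# grid, the kind of kernel typically offloaded to GPUs in projection-method
# solvers. A generic CPU sketch, not the authors' implementation.
import numpy as np

def jacobi_poisson(rhs, h, n_iter=2000):
    """Solve lap(p) = rhs with homogeneous Dirichlet boundaries by Jacobi iteration."""
    p = np.zeros_like(rhs)
    for _ in range(n_iter):
        p_new = p.copy()
        p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                    p[1:-1, 2:] + p[1:-1, :-2] - h * h * rhs[1:-1, 1:-1])
        p = p_new
    return p

n = 65
h = 1.0 / (n - 1)
rhs = np.zeros((n, n))
rhs[n // 2, n // 2] = 1.0 / h**2                  # point source at the domain center
print(jacobi_poisson(rhs, h).min())
```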

  16. GeNN: a code generation framework for accelerated brain simulations

    PubMed Central

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  17. Mixed-field GCR Simulations for Radiobiological Research using Ground Based Accelerators

    NASA Astrophysics Data System (ADS)

    Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis

    Space radiation is comprised of a large number of particle types and energies, which have differential ionization power from high energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and dosimetry, electronics parts, and shielding testing using mono-energetic beams for single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements the rapid switching mode and the higher energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create the GCR Z-spectrum in major energy bins. After considering the GCR environment and energy limitations of NSRL, a GCR reference field is proposed after extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding within 20 percent accuracy compared to simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with cell and animal models of human risks. We discuss possible approaches to map important biological time scales in experimental models using ground-based simulation with extended exposure of up to a few weeks and fractionation approaches at a GCR simulator.

  18. Mixed-field GCR Simulations for Radiobiological Research Using Ground Based Accelerators

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis A.

    2014-01-01

    Space radiation is comprised of a large number of particle types and energies, which have differential ionization power from high energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and dosimetry, electronics parts, and shielding testing using mono-energetic beams for single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements the rapid switching mode and the higher energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create the GCR Z-spectrum in major energy bins. After considering the GCR environment and energy limitations of NSRL, a GCR reference field is proposed after extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding within 20% accuracy compared to simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with cell and animal models of human risks. We discuss possible approaches to map important biological time scales in experimental models using ground-based simulation with extended exposure of up to a few weeks and fractionation approaches at a GCR simulator.

  19. Accelerating forward and adjoint simulations of seismic wave propagation on large GPU-clusters

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Rietmann, M.; Charles, J.; Messmer, P.; Komatitsch, D.; Schenk, O.; Tromp, J.

    2012-12-01

    In seismic tomography, waveform inversions require accurate simulations of seismic wave propagation in complex media. The current versions of our spectral-element method (SEM) packages, the local-scale code SPECFEM3D and the global-scale code SPECFEM3D_GLOBE, are widely used open-source community codes which simulate seismic wave propagation for local-, regional- and global-scale applications. These numerical simulations compute highly accurate seismic wavefields, accounting for fully 3D Earth models. However, code performance often governs whether seismic inversions become feasible or remain elusive. We report here on extending these high-order finite-element packages to further exploit graphics processing units (GPUs) and perform numerical simulations of seismic wave propagation on large GPU clusters. These enhanced packages can be readily run either on multi-core CPUs only or together with many-core GPU acceleration devices. One of the challenges in parallelizing finite-element codes is the potential for race conditions during the assembly phase. We therefore investigated different methods such as mesh coloring or atomic updates on the GPU. In order to achieve strong scaling, we needed to ensure good overlap of data motion at all levels, including internode and host-accelerator transfers. These new MPI/CUDA solvers exhibit excellent scalability and achieve speedup on a node-to-node basis over the carefully tuned equivalent multi-core MPI solver. We present case studies run on a Cray XK6 GPU architecture up to 896 nodes to demonstrate the performance of both the forward and adjoint functionality of the code packages. Running simulations on such dedicated GPU clusters further reduces computation times and pushes seismic inversions into a new, higher frequency realm.
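
    Of the two race-avoidance strategies mentioned above, mesh coloring can be illustrated without reference to the SPECFEM codes: elements that share a global node receive different colors, and assembly then proceeds one color at a time so no two concurrent threads write the same degree of freedom. The greedy Python sketch below is illustrative only; the connectivity format and function name are assumptions.

      from collections import defaultdict

      def greedy_element_coloring(element_nodes):
          # Color elements so that elements sharing any global node never share
          # a color; each color batch can then be assembled in parallel
          # without atomic updates.
          node_to_elems = defaultdict(list)
          for e, nodes in enumerate(element_nodes):
              for n in nodes:
                  node_to_elems[n].append(e)

          colors = {}
          for e, nodes in enumerate(element_nodes):
              taken = {colors[other] for n in nodes
                       for other in node_to_elems[n] if other in colors}
              c = 0
              while c in taken:
                  c += 1
              colors[e] = c
          return colors

      # Two quadrilaterals sharing an edge must land in different batches.
      print(greedy_element_coloring([(0, 1, 4, 3), (1, 2, 5, 4)]))  # {0: 0, 1: 1}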

  20. Simulations of a High-Transformer-Ratio Plasma Wakefield Accelerator Using Multiple Electron Bunches

    SciTech Connect

    Kallos, Efthymios; Muggli, Patric; Katsouleas, Thomas; Yakimenko, Vitaly; Park, Jangho

    2009-01-22

    Particle-in-cell simulations of a plasma wakefield accelerator in the linear regime are presented, consisting of four electron bunches that are fed into a high-density plasma. It is found that a high transformer ratio can be maintained over 43 cm of plasma if the charge in each bunch is increased linearly, the bunches are placed 1.5 plasma wavelengths apart, and the bunch emittances are adjusted to compensate for the nonlinear focusing forces. The generated wakefield is sampled by a test witness bunch whose energy gain after the plasma is six times the energy loss of the drive bunches.

  1. High energy gain in three-dimensional simulations of light sail acceleration

    SciTech Connect

    Sgattoni, A.; Sinigardi, S.; Macchi, A.

    2014-08-25

    The dynamics of radiation pressure acceleration in the relativistic light sail regime are analysed by means of large scale, three-dimensional (3D) particle-in-cell simulations. In contrast to other acceleration mechanisms, the 3D dynamics leads to faster and higher energy gain than in 1D or 2D geometry. This effect is caused by the local decrease of the target density due to transverse expansion, leading to a “lighter sail.” However, the rarefaction of the target leads to an earlier transition to transparency, limiting the energy gain. A transverse instability leads to a structured and inhomogeneous ion distribution.

  2. Simulation of Cascaded Longitudinal-Space-Charge Amplifier at the Fermilab Accelerator Science & Technology (FAST) Facility

    SciTech Connect

    Halavanau, A.; Piot, P.

    2015-12-01

    Cascaded Longitudinal Space Charge Amplifiers (LSCA) have been proposed as a mechanism to generate density modulation over a broad spectral range. The scheme has recently been demonstrated in the optical regime and has confirmed the production of broadband optical radiation. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at the Fermilab Accelerator Science & Technology (FAST) facility to produce broadband ultraviolet radiation. Our studies are carried out using elegant with an included tree-based, grid-less space-charge algorithm.

  3. Implicit Monte Carlo diffusion - an acceleration method for Monte Carlo time dependent radiative transfer simulations

    SciTech Connect

    Gentile, N A

    2000-10-01

    We present a method for accelerating time dependent Monte Carlo radiative transfer calculations by using a discretization of the diffusion equation to calculate probabilities that are used to advance particles in regions with small mean free path. The method is demonstrated on problems with 1- and 2-dimensional orthogonal grids. It results in decreases in run time of more than an order of magnitude on these problems, while producing answers with accuracy comparable to pure IMC simulations. We call the method Implicit Monte Carlo Diffusion, which we abbreviate IMD.

  4. Simulation studies of crystal-photodetector assemblies for the Turkish accelerator center particle factory electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Kocak, F.

    2015-07-01

    The Turkish Accelerator Center Particle Factory detector will be constructed for the detection of the particles produced from the collision of a 1 GeV electron beam against a 3.6 GeV positron beam. PbWO4 and CsI(Tl) crystals are considered for the construction of the electromagnetic calorimeter part of the detector. The optical photons generated in these crystals are detected by avalanche or PIN photodiodes. The Geant4 simulation code has been used to estimate the energy resolution of the calorimeter for these crystal-photodiode assemblies.

  5. Simulations of flame acceleration and deflagration-to-detonation transitions in methane-air systems

    SciTech Connect

    Kessler, D.A.; Gamezo, V.N.; Oran, E.S.

    2010-11-15

    Flame acceleration and deflagration-to-detonation transitions (DDT) in large obstructed channels filled with a stoichiometric methane-air mixture are simulated using a single-step reaction mechanism. The reaction parameters are calibrated using known velocities and length scales of laminar flames and detonations. Calculations of the flame dynamics and DDT in channels with obstacles are compared to previously reported experimental data. The results obtained using the simple reaction model qualitatively, and in many cases, quantitatively match the experiments and are found to be largely insensitive to small variations in model parameters. (author)

  6. Numerical simulation study of positron production by intense laser-accelerated electrons

    SciTech Connect

    Yan, Yonghong; Science and Technology on Plasma Physics Laboratory, Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang 621900 ; Dong, Kegong; Wu, Yuchi; Zhang, Bo; Gu, Yuqiu; Yao, Zeen

    2013-10-15

    Positron production by ultra-intense laser-accelerated electrons has been studied with two-dimensional particle-in-cell and Monte Carlo simulations. The dependence of the positron yield on plasma density, plasma length, and converter thickness was investigated in detail with fixed parameters of a typical 100 TW laser system. The results show that with the optimal plasma and converter parameters a positron beam containing up to 1.9 × 10{sup 10} positrons can be generated, which has a small divergence angle (10°), a high temperature (67.2 MeV), and a short pulse duration (1.7 ps).

  7. Laboratory simulation of ion acceleration in the presence of lower hybrid waves

    NASA Astrophysics Data System (ADS)

    McWilliams, R.; Koslover, R.; Boehmer, H.; Rynn, N.

    The UCI Q-machine has been used to simulate the effect of lower hybrid waves on ion acceleration. Laser induced fluorescence was used for high resolution, nonperturbing measurements of the ion velocity distribution function. The plasma consisted of a 1 m long, 5 cm diameter barium plasma with densities on the order of 10^10 cm^-3, contained by a 3 kG magnetic field. Substantial changes in the perpendicular ion distribution were found. Main-body ion heating occurred along with non-Maxwellian tail production.

  8. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    SciTech Connect

    Katsouleas, Thomas; Decyk, Viktor

    2009-10-14

    Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models", Viktor K. Decyk, University of California, Los Angeles, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in the low-density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was to support the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of the code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference 2 below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale Particle-in-Cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, its response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices; thus independent blocks of 2D slices from different time steps can run simultaneously. The major difficulty arose when particles at the edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8]. Jean-Luc Vay at Lawrence Berkeley
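
    The pipelining idea described above can be reduced to a toy wavefront schedule: a block of 2D slices at beam step t can start once the same block has finished step t-1 and the upstream block has finished step t, so blocks from different beam steps run concurrently. The sketch below only computes such a schedule to show the expected stage count; it is a simplified illustration of the concept, not the QuickPIC implementation.

      def pipeline_makespan(n_blocks, n_steps):
          # start[b][t]: earliest stage at which block b can execute beam step t,
          # assuming each (block, step) unit of work takes one stage.
          start = [[0] * n_steps for _ in range(n_blocks)]
          for t in range(n_steps):
              for b in range(n_blocks):
                  prev_step = start[b][t - 1] + 1 if t > 0 else 0
                  upstream = start[b - 1][t] + 1 if b > 0 else 0
                  start[b][t] = max(prev_step, upstream)
          return start[-1][-1] + 1      # total stages with pipelining

      # 16 slice blocks over 1000 beam steps: pipelined vs. fully serial stages.
      print(pipeline_makespan(16, 1000), 16 * 1000)   # 1015 vs 16000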

  9. 3D MHD Simulations of the May 2, 1998 halo CME: Shock formation and SEP acceleration

    NASA Astrophysics Data System (ADS)

    Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.; Forbes, T. G.; Lee, M. A.

    We present the results of two numerical models of the partial-halo CME event associated with NOAA AR8210 on May 2, 1998. Our simulations are fully three-dimensional and involve compressible magnetohydrodynamics with turbulent energy transport. We begin by first producing a steady-state solar wind for Carrington Rotation 1935/6, following the methodology described in Roussev et al. (2003). We impose shearing motions along the polarity inversion line of AR8210, followed by converging motions, both via the modification of the boundary conditions at the Sun's surface. As a consequence, a flux rope forms within the sheared arcade during the CME. The flux rope gradually accelerates, leaving behind the remnants of a flare loop system that results from ongoing magnetic reconnection in the naturally formed current sheet. The flux rope leaves the Sun, forming a CME emerging through a highly structured, ambient solar wind. A shock wave forms in front of the ejected matter. Estimates for the spectral index and cutoff energy for the diffusive solar energetic particle shock acceleration mechanism show that protons can be efficiently accelerated up to energies of 0.1-10 GeV.

  10. Simulation of diatomic gas-wall interaction and accommodation coefficients for negative ion sources and accelerators.

    PubMed

    Sartori, E; Brescaccin, L; Serianni, G

    2016-02-01

    Particle-wall interactions determine in different ways the operating conditions of plasma sources, ion accelerators, and beams operating in vacuum. For instance, a contribution to gas heating is given by ion neutralization at walls; beam losses and stray particle production (detrimental for high-current negative ion systems such as beam sources for fusion) are caused by collisional processes with residual gas, with the gas density profile determined by the scattering of neutral particles at the walls. This paper shows that Molecular Dynamics (MD) studies at the nano-scale can provide accommodation parameters for gas-wall interactions, such as the momentum accommodation coefficient and the energy accommodation coefficient: in non-isothermal flows (such as the neutral gas in the accelerator, coming from the plasma source), these affect the gas density gradients and influence the efficiency and losses, in particular of negative ion accelerators. For ideal surfaces, the computation also provides the angular distribution of scattered particles. The classical MD method has been applied to the case of diatomic hydrogen molecules. Single collision events, against a frozen wall or a fully thermal lattice, have been simulated by using probe molecules. Different modelling approximations are compared. PMID:26931910

  11. Neutron dosimetry in linear electron accelerator during radiotherapy treatment: simulation and experiment

    NASA Astrophysics Data System (ADS)

    Manfredotti, Claudio; Nastasi, U.; Ongaro, C.; Stasi, E.; Zanini, Alessandro

    1995-03-01

    In the electron linear accelerators used for radiotherapy by high-energy electrons or gamma rays, there is a non-negligible production of neutrons by photodisintegration or electrodisintegration reactions on the high-Z components of the machine head (target, flattening filter, collimators). At the Experimental Physics Department of Torino University, Torino, Italy, an experimental and theoretical evaluation has been performed of the undesired neutron production in the MD Class Mevatron Siemens accelerator used at the Radiotherapy Department of S. Giovanni Battista A.S. Hospital for cancer therapy by a 15 MV gamma ray beam. A simulation of the total process has been carried out, using the EGS4 Monte Carlo computer code for the evaluation of photoneutron spectra and the MCNP code for the neutron transport in the patient's body. The geometrical descriptions both of the accelerator head in EGS4 and of the anthropomorphous phantom in MCNP have been highly optimized. Experimental measurements have been carried out with BD 100R bubble detectors appropriately positioned inside a new phantom made of polyethylene and plexiglass, especially designed for this purpose.

  12. The influence of combined alignments on lateral acceleration on mountainous freeways: a driving simulator study.

    PubMed

    Wang, Xuesong; Wang, Ting; Tarko, Andrew; Tremont, Paul J

    2015-03-01

    Combined horizontal and vertical alignments are frequently used on mountainous freeways in China; however, design guidelines that consider the safety impact of combined alignments are not currently available. Past field studies have provided some data on the relationship between road alignment and safety, but the effects of differing combined alignments on either lateral acceleration or safety have not been systematically examined. The primary reason for this void in past research is that most of the prior studies used observational methods that did not permit control of the key variables. A controlled parametric study is needed that examines lateral acceleration as drivers adjust their speeds across a range of combined horizontal and vertical alignments. Such a study was conducted in Tongji University's eight-degree-of-freedom driving simulator by replicating the full range of combined alignments used on a mountainous freeway in China. Multiple linear regression models were developed to estimate the effects of the combined alignments on lateral acceleration. Based on these models, domains were calculated to illustrate the results and to assist engineers in designing safer mountainous freeways. PMID:25626165
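
    The modeling step described above is a standard multiple linear regression of lateral acceleration on the combined-alignment variables. The minimal sketch below is self-contained; the predictors (horizontal curvature, grade, speed) and every number are hypothetical placeholders, not data from the study.

      import numpy as np

      # Hypothetical design matrix: [1, curvature (1/m), grade (%), speed (km/h)]
      X = np.array([[1.0, 1 / 400, -3.0, 100.0],
                    [1.0, 1 / 600,  2.0,  90.0],
                    [1.0, 1 / 800,  0.0, 110.0],
                    [1.0, 1 / 300, -4.0,  80.0],
                    [1.0, 1 / 500,  3.0, 105.0]])
      # Hypothetical peak lateral accelerations (m/s^2) from simulator runs
      y = np.array([2.1, 1.3, 0.9, 2.4, 1.6])

      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(dict(zip(["intercept", "curvature", "grade", "speed"], beta)))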

  13. Perceived benefits and challenges of repeated exposure to high fidelity simulation experiences of first degree accelerated bachelor nursing students.

    PubMed

    Kaddoura, Mahmoud; Vandyke, Olga; Smallwood, Christopher; Gonzalez, Kristen Mathieu

    2016-01-01

    This study explored perceptions of first-degree entry-level accelerated bachelor nursing students regarding benefits and challenges of exposure to multiple high fidelity simulation (HFS) scenarios, which has not been studied to date. These perceptions conformed to some research findings among Associate Degree, traditional non-accelerated, and second-degree accelerated Bachelor of Science in Nursing (BSN) students faced with one to two simulations. However, first-degree accelerated BSN students faced with multiple complex simulations perceived improvements on all outcomes, including critical thinking, confidence, competence, and theory-practice integration. On the negative side, some reported feeling overwhelmed by the multiple HFS scenarios. Evidence from this study supports HFS as an effective teaching and learning method for nursing students, along with valuable implications for many other fields. PMID:26260522

  14. Effects of simulated weightlessness on responses of untrained men to +Gz acceleration

    NASA Technical Reports Server (NTRS)

    Jacobson, L. B.; Hyatt, K. H.; Sandler, H.

    1974-01-01

    This study documents bedrest-induced metabolic and physiologic changes in six untrained men exposed, following a two-week period of simulated weightlessness, to possible +Gz acceleration profiles anticipated for Space Shuttle vehicle travel. All subjects demonstrated decreased +Gz tolerance following simulated weightlessness. While only one of six subjects could not tolerate the +Gz profile in the control phase of the study, three of the six could not complete the postbed-rest study. The use of an inflated standard Air Force cutaway G-suit improved +Gz tolerance in all subjects, but two of six subjects still failed to complete the profile. These findings are discussed in reference to the selection of untrained humans for Space Shuttle vehicle travel.

  15. The latest results from ELM-simulation experiments in plasma accelerators

    NASA Astrophysics Data System (ADS)

    Garkusha, I. E.; Arkhipov, N. I.; Klimov, N. S.; Makhlaj, V. A.; Safronov, V. M.; Landman, I.; Tereshin, V. I.

    2009-12-01

    Recent results of ELM-simulation experiments with quasi-stationary plasma accelerators (QSPAs) Kh-50 (Kharkov, Ukraine) and QSPA-T (Troitsk, Russia) as well as experiments in the pulsed plasma gun MK-200UG (Troitsk, Russia) are discussed. Primary attention in Troitsk experiments has been focused on investigating the carbon-fibre composite (CFC) and tungsten erosion mechanisms, their onset conditions and the contribution of various erosion mechanisms (including droplet splashing) to the resultant surface damage at varying plasma heat flux. The obtained results are used for validating the numerical codes PEGASUS and MEMOS developed in FZK. Crack patterns and residual stresses in tungsten targets under repetitive edge localized mode (ELM)-like plasma pulses are studied in simulation experiments with QSPA Kh-50. Statistical processing of the experimental results on crack patterns after different numbers of QSPA Kh-50 exposures as well as those on the dependence of cracking on the heat load and surface temperature is performed.

  16. Simulating Electron Effects in Heavy-Ion Accelerators with Solenoid Focusing

    SciTech Connect

    Sharp, W M; Grote, D P; Cohen, R H; Friedman, A; Molvik, A W; Vay, J; Seidl, P; Roy, P K; Coleman, J E; Haber, I

    2007-06-29

    Contamination from electrons is a concern for solenoid-focused ion accelerators being developed for experiments in high-energy-density physics. These electrons, produced directly by beam ions hitting lattice elements or indirectly by ionization of desorbed neutral gas, can potentially alter the beam dynamics, leading to a time-varying focal spot, increased emittance, halo, and possibly electron-ion instabilities. The electrostatic particle-in-cell code WARP is used to simulate electron-cloud studies on the solenoid-transport experiment (STX) at Lawrence Berkeley National Laboratory. We present self-consistent simulations of several STX configurations and compare the results with experimental data in order to calibrate physics parameters in the model.

  17. Simulating Electron Effects in Heavy-Ion Accelerators with Solenoid Focusing

    SciTech Connect

    Sharp, W. M.; Grote, D. P.; Cohen, R. H.; Friedman, A.; Molvik, A. W.; Vay, J.-L.; Seidl, P. A.; Roy, P. K.; Coleman, J. E.; Haber, I.

    2007-06-20

    Contamination from electrons is a concern for solenoid-focused ion accelerators being developed for experiments in high-energy-density physics. These electrons, produced directly by beam ions hitting lattice elements or indirectly by ionization of desorbed neutral gas, can potentially alter the beam dynamics, leading to a time-varying focal spot, increased emittance, halo, and possibly electron-ion instabilities. The electrostatic particle-in-cell code WARP is used to simulate electron-cloud studies on the solenoid-transport experiment (STX) at Lawrence Berkeley National Laboratory. We present self-consistent simulations of several STX configurations and compare the results with experimental data in order to calibrate physics parameters in the model.

  18. The Importance of Simulation Workflow and Data Management in the Accelerated Climate Modeling for Energy Project

    NASA Astrophysics Data System (ADS)

    Bader, D. C.

    2015-12-01

    The Accelerated Climate Modeling for Energy (ACME) Project is concluding its first year. Supported by the Office of Science in the U.S. Department of Energy (DOE), its vision is to be "an ongoing, state-of-the-science Earth system modeling, simulation and prediction project that optimizes the use of DOE laboratory resources to meet the science needs of the nation and the mission needs of DOE." Included in the "laboratory resources" is a large investment in computational, network and information technologies that will be utilized both to build better and more accurate climate models and to broadly disseminate the data they generate. Current model diagnostic analysis and data dissemination technologies will not scale to the size of the simulations and the complexity of the models envisioned by ACME and other top-tier international modeling centers. In this talk, the ACME Workflow component's plans to meet these future needs will be described and early implementation examples will be highlighted.

  19. R-leaping: Accelerating the stochastic simulation algorithm by reaction leaps

    NASA Astrophysics Data System (ADS)

    Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros

    2006-08-01

    A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.
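
    The sampling step at the heart of R-leaping can be stated compactly: fix the total number of firings L, split it across channels with conditional binomial draws (jointly a multinomial with probabilities a_j/a_0), and advance time by a Gamma (Erlang) variate. The Python sketch below assumes propensities stay effectively constant over the leap and omits the accuracy and negative-population controls discussed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def r_leap_step(x, propensity_fn, stoich, L):
          # One R-leap: fire exactly L reactions in total, distributed over the
          # channels by conditional binomial sampling; stoich is (reactions, species).
          a = propensity_fn(x)
          a0 = a.sum()
          tau = rng.gamma(shape=L, scale=1.0 / a0)   # time to observe L firings
          k = np.zeros(len(a), dtype=int)
          remaining, rest = L, a0
          for j in range(len(a) - 1):
              if remaining == 0:
                  break
              k[j] = rng.binomial(remaining, a[j] / rest)
              remaining -= k[j]
              rest -= a[j]
          k[-1] = remaining
          return x + stoich.T @ k, tau

      # Hypothetical example: A + B -> C and C -> A + B with mass-action rates.
      stoich = np.array([[-1, -1, 1], [1, 1, -1]])
      prop = lambda s: np.array([0.01 * s[0] * s[1], 0.1 * s[2]])
      print(r_leap_step(np.array([1000, 800, 0]), prop, stoich, L=50))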

  20. Simulating Electron Clouds in High-Current Ion Accelerators with Solenoid Focusing

    SciTech Connect

    Sharp, W; Grote, D; Cohen, R; Friedman, A; Vay, J; Seidl, P; Roy, P; Coleman, J; Armijo, J; Haber, I

    2006-08-15

    Contamination from electrons is a concern for the solenoid-focused ion accelerators being developed for experiments in high-energy-density physics (HEDP). These electrons are produced directly by beam ions hitting lattice elements and intercepting diagnostics, or indirectly by ionization of desorbed neutral gas, and they are believed responsible for time dependence of the beam radius, emittance, and focal distance seen on the Solenoid Transport Experiment (STX) at Lawrence Berkeley National Laboratory. The electrostatic particle-in-cell code WARP has been upgraded to include the physics needed to simulate electron-cloud phenomena. We present preliminary self-consistent simulations of STX experiments suggesting that the observed time dependence of the beam stems from a complicated interaction of beam ions, desorbed neutrals, and electrons.

  1. Simulating Electron Clouds in High-Current Ion Accelerators with Solenoid Focusing

    SciTech Connect

    Sharp, W.M.; Grote, D.P.; Cohen, R.H.; Friedman, A.; Vay, J.-L.; Seidl, P.A.; Roy, P.K.; Coleman, J.E.; Armijo, J.; Haber, I.

    2006-09-20

    Contamination from electrons is a concern for the solenoid-focused ion accelerators being developed for experiments in high-energy-density physics (HEDP). These electrons are produced directly by beam ions hitting lattice elements and intercepting diagnostics, or indirectly by ionization of desorbed neutral gas, and they are believed responsible for time dependence of the beam radius, emittance, and focal distance seen on the Solenoid Transport Experiment (STX) at Lawrence Berkeley National Laboratory. The electrostatic particle-in-cell code WARP has been upgraded to include the physics needed to simulate electron-cloud phenomena. We present preliminary self-consistent simulations of STX experiments suggesting that the observed time dependence of the beam stems from a complicated interaction of beam ions, desorbed neutrals, and electrons.

  2. Multi-cavity complex controller with vector simulator for TESLA technology linear accelerator

    NASA Astrophysics Data System (ADS)

    Czarski, Tomasz; Pozniak, Krzysztof T.; Romaniuk, Ryszard S.; Szewinski, Jaroslaw

    2008-01-01

    A digital control system, the main part of the Low Level RF system for superconducting cavities of a linear accelerator, is presented. The FPGA-based controller, supported by the MATLAB system, was developed to investigate a novel firmware implementation. The complex control algorithm, based on non-linear system identification, is the proposed approach, verified by preliminary experimental results. The general idea is implemented as the Multi-Cavity Complex Controller (MCC) and is still under development. The FPGA-based controller executes a procedure according to the prearranged control tables (Feed-Forward, Set-Point and Corrector unit) to fulfill the required cavity performance: driving in resonance during filling and field stabilization over the flattop range. An adaptive control algorithm is applied for the feed-forward and feedback modes. A vector Simulator table has been introduced for efficient verification of the FPGA controller structure. Experimental results of the internal simulation are presented for a representative cavity condition.

  3. Simulation of direct plasma injection for laser ion beam acceleration with a radio frequency quadrupole

    SciTech Connect

    Jin, Q. Y.; Li, Zh. M.; Liu, W.; Zhao, H. Y. Zhang, J. J.; Sha, Sh.; Zhang, Zh. L.; Zhang, X. Zh.; Sun, L. T.; Zhao, H. W.

    2014-07-15

    The direct plasma injection scheme (DPIS) has been studied at the Institute of Modern Physics for several years. A C{sup 6+} beam with a peak current of 13 mA and an energy of 593 keV/u has been successfully achieved after acceleration with the DPIS method. To understand the process of DPIS, the following simulations have been done. First, with the total current intensity and the relative yields of different charge states for carbon ions measured at different distances from the target, the absolute current intensities and time dependences for different charge states are scaled to the exit of the laser ion source in the DPIS. Then, with these derived values as input parameters, the extraction of the carbon beam from the laser ion source to the radio frequency quadrupole with DPIS is simulated, and the results agree well with the experiment.

  4. Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code

    PubMed Central

    Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza

    2013-01-01

    The Monte Carlo method is the most accurate method for simulation of radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator for 6 MV and 18 MV beams was performed. The results of the simulation were validated by measurements in water with an ionization chamber and extended dose range (EDR2) film in solid water. The linac's X-ray output is highly sensitive to the properties of the primary electron beam. A square field of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. Head simulation was performed with BEAMnrc and dose calculation with DOSXYZnrc for the film measurements, and the 3ddose file produced by DOSXYZnrc was analyzed with a homemade MATLAB program. At 6 MV, the agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, the agreement was within 1%, except in the build-up region; in the build-up region, the difference was 1% at 6 MV and 2% at 18 MV. The mean difference between measurements and the Monte Carlo simulation is very small at both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, as in the intensity-modulated radiation therapy technique. PMID:24672765

  5. Vertical accelerator device to apply loads simulating blast environments in the military to human surrogates.

    PubMed

    Yoganandan, Narayan; Pintar, Frank A; Schlick, Michael; Humm, John R; Voo, Liming; Merkle, Andrew; Kleinberger, Michael

    2015-09-18

    The objective of the study was to develop a simple device, the Vertical accelerator (Vertac), to apply vertical impact loads to Post Mortem Human Subject (PMHS) or dummy surrogates, because injuries sustained in military conflicts are associated with this vector, for example, under-body blasts from explosive devices/events. The two-part mechanically controlled device consisted of load-application and load-receiving sections connected by a lever arm. The former section incorporated a falling weight to impact one end of the lever arm, inducing a reaction at the other, load-receiving end. The "launch-plate" on this end of the arm applied the vertical impact load/acceleration pulse under different initial conditions to biological/physical surrogates attached to the second section. It is possible to induce different acceleration pulses by using varying energy-absorbing materials and controlling drop height and weight. The second section of Vertac had the flexibility to accommodate different body regions for vertical loading experiments. The device is simple and inexpensive. It has the ability to control pulses and the flexibility to accommodate different sub-systems/components of human surrogates. It has the capability to incorporate preloads and military personal protective equipment (e.g., combat helmet). It can simulate vehicle roofs. The device allows for intermittent specimen evaluations (x-ray and palpation, without changing specimen alignment). The two free but interconnected sections can be used to advance safety for military personnel. Examples demonstrating the feasibility of the Vertac device to apply vertical impact accelerations, using PMHS head-neck preparations with helmet and booted Hybrid III dummy lower-leg preparations under in-contact and launch-type impact experiments, are presented. PMID:26159057

  6. High-resolution simulations of downslope gravity currents in the acceleration phase

    NASA Astrophysics Data System (ADS)

    Dai, Albert

    2015-07-01

    Gravity currents generated from an instantaneous buoyancy source propagating down a slope in the range of 0∘ ≤ θ < 90∘ have been investigated in the acceleration phase by means of high-resolution two-dimensional simulations of the incompressible Navier-Stokes equations with the Boussinesq approximation. The front velocity history shows that, after the heavy fluid is released from rest, the flow goes through the acceleration phase, reaching a maximum front velocity Uf,max, and is then followed by the deceleration phase. A maximum of Uf,max is found near θ = 40∘, which is supported by the improved theory. It is identified for the first time that the time of acceleration decreases with increasing slope angle when the slope angle is greater than approximately 10∘, while for gravity currents on lower slope angles the time of acceleration increases with slope angle. A fundamental difference in flow patterns, which helps explain the distinct characteristics of gravity currents on high and low slope angles using scaling arguments, is revealed. Energy budgets further show that, as the slope angle increases, the ambient fluid is more easily engaged in the gravitational convection and the potential energy loss is more efficiently converted into the kinetic energy associated with the ambient fluid. The propagation of gravity currents on a slope is found to be qualitatively modified as the depth ratio, i.e., the ratio of lock height to channel height, approaches unity. As the depth ratio increases, the conversion of potential energy loss into the kinetic energy associated with the heavy fluid is inhibited and the conversion into the kinetic energy associated with the ambient fluid is enhanced by the confinement of the top wall.

  7. Design and fabrication of plasma accelerator for space micro-debris simulation and preliminary experiment

    NASA Astrophysics Data System (ADS)

    Han, J.; Zhang, Z.; Huang, J.; Li, X.; Chen, Z.; Quan, R.

    A simulation facility for hypervelocity impact of space micro-debris has been designed and fabricated. Just after the assembly of the facility, some preliminary debugging experiments have been conducted. A plasma accelerator is the core component of the facility, which is composed of a coaxial discharge electrode, an electromagnetic compressing coil, and a nozzle. The coaxial electrode is discharged synchronously by pulsed gas injected from a delicately fabricated electromagnetic valve and a pulse voltage from a capacitance bank. The time periods for the pulse gas valve to turn on and to feed gas are 400 and 900 microseconds, respectively. As to the capacitance bank, the maximum capacity is 512 μF and it can be charged as high as 30 kV; therefore, the maximum energy storage of the capacitance and discharge is 230 kJ. A custom-designed control circuit ignites the pulse valve and the discharge switch in turn. A block of plasma is then produced and accelerated into the electromagnetic coil, where the plasma is compressed to higher density. Eventually a plasma flow with high pressure and temperature is sprayed out of the nozzle, which pushes a cluster of micro-particles attached closely to the nozzle exit to hypervelocity. During the preliminary debugging experiment, a 128 μF capacitance was charged to 15 kV and a 400 kA discharge current was generated; glass spheres of 100 μm diameter were then accelerated to 4.3 km/s. The debugging of the facility is still in progress, and in the near future it should accelerate micro-particles to higher velocity.

  8. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units

    PubMed Central

    2013-01-01

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling. PMID:24634618
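
    For context on the method itself: aMD raises the potential whenever it drops below a threshold E, flattening barriers, and sampled configurations are afterwards reweighted by the boost. The widely used boost form (due to Hamelberg and co-workers) and the corresponding reweighting factor are sketched below in plain Python; this is the textbook formula, not the OpenMM/AMOEBA implementation described in the paper.

      import math

      def amd_boost(V, E, alpha):
          # Standard aMD boost: dV = (E - V)^2 / (alpha + E - V) for V < E,
          # zero otherwise; E sets the threshold, alpha the smoothness.
          if V >= E:
              return 0.0
          return (E - V) ** 2 / (alpha + E - V)

      def amd_reweight(V, E, alpha, kT):
          # Canonical reweighting factor exp(+dV/kT) for a configuration
          # sampled on the boosted potential surface.
          return math.exp(amd_boost(V, E, alpha) / kT)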

  9. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424
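
    The rescaling idea referred to above can be stated without any GPU code: if the baseline run records, for every detected photon, its total path length in the medium, then the diffuse reflectance for a new absorption coefficient is the mean Beer-Lambert weight over those stored paths. The plain-Python sketch below shows only this absorption reweighting with stand-in data; scattering rescaling and the authors' MATLAB/GPU package details are omitted.

      import numpy as np

      def rescale_reflectance(path_lengths_cm, mua_per_cm):
          # Each detected photon from the baseline run contributes
          # exp(-mua * L) for its recorded path length L.
          return np.exp(-mua_per_cm * np.asarray(path_lengths_cm)).mean()

      # Stand-in for path lengths recorded by a single baseline simulation
      paths = np.random.default_rng(1).exponential(scale=1.5, size=100_000)
      for mua in (0.01, 0.1, 1.0):      # sweep absorption without re-simulating
          print(mua, rescale_reflectance(paths, mua))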

  10. MCNP Neutron Simulations: The Effectiveness of the University of Kentucky Accelerator Laboratory Pit

    NASA Astrophysics Data System (ADS)

    Jackson, Daniel; Nguyen, Thien An; Hicks, S. F.; Rice, Ben; Vanhoy, J. R.

    2015-10-01

    The design of the Van de Graaff Particle Accelerator complex at the University of Kentucky is marked by the unique addition of a pit in the main neutron scattering room underneath the neutron source and detection shielding assembly. This pit was constructed as a neutron trap in order to decrease the neutron flux within the laboratory. Such a decrease of background neutron flux reduces, as much as possible, the noise in the detection of neutrons scattered off the samples under study. This project uses the Monte-Carlo N-Particle Transport Code (MCNP) to model the structure of the accelerator complex, gas cell, and the detector's collimator and shielding apparatus to calculate the neutron flux in various sections of the laboratory. Simulations were completed with baseline runs of 10^7 neutrons at energies of 4 MeV and 17 MeV, produced respectively by the 3H(p,n)3He and 3H(d,n)4He source reactions. In addition, a comparison model of the complex with simply a floor and no pit was designed, and the respective neutron fluxes of both models were calculated and compared. The results of the simulations seem to affirm the validity of the pit design in significantly reducing the overall neutron flux throughout the accelerator complex, which could be used in future designs to increase the precision and reliability of data. This project was supported in part by the DOE NEUP Grant NU-12-KY-UK-0201-05 and the Donald A. Cowan Physics Institute at the University of Dallas.

  11. Simulations of radiation pressure ion acceleration with the VEGA Petawatt laser

    NASA Astrophysics Data System (ADS)

    Stockhausen, Luca C.; Torres, Ricardo; Conejero Jarque, Enrique

    2016-09-01

    The Spanish Pulsed Laser Centre (CLPU) is a new high-power laser facility for users. Its main system, VEGA, is a CPA Ti:Sapphire laser which, in its final phase, will be able to reach Petawatt peak powers in pulses of 30 fs with a pulse contrast of 1:10^10 at 1 ps. The extremely low level of pre-pulse intensity makes this system ideally suited for studying the laser interaction with ultrathin targets. We have used the particle-in-cell (PIC) code OSIRIS to carry out 2D simulations of the acceleration of ions from ultrathin solid targets under the unique conditions provided by VEGA, with laser intensities up to 10^22 W cm^-2 impinging normally on 20 - 60 nm thick overdense plasmas, with different polarizations and pre-plasma scale lengths. We show how signatures of the radiation pressure-dominated regime, such as layer compression and bunch formation, are only present with circular polarization. By passively shaping the density gradient of the plasma, we demonstrate an enhancement in peak energy up to tens of MeV and monoenergetic features. On the contrary, linear polarization at the same intensity level causes the target to blow up, resulting in much lower energies and broader spectra. One limiting factor of Radiation Pressure Acceleration is the development of Rayleigh-Taylor like instabilities at the interface of the plasma and photon fluid. This results in the formation of bubbles in the spatial profile of laser-accelerated proton beams. These structures were previously evidenced both experimentally and theoretically. We have performed 2D simulations to characterize this bubble-like structure and report on the dependency on laser and target parameters.

  12. Front-to-end simulations of the design of a laser wakefield accelerator with external injection

    SciTech Connect

    Urbanus, W.H.; Dijk, W. van; Geer, S.B. van der; Brussaard, G.J.H.; Wiel, M.J. van der

    2006-06-01

    We report the design of a laser wakefield accelerator (LWA) with external injection by a rf photogun and acceleration by a linear wakefield in a capillary discharge channel. The design process is complex due to the large number of intricately coupled free parameters. To alleviate this problem, we performed front-to-end simulations of the complete system. The tool we used was the general particle-tracking code, extended with a module representing the linear wakefield by a two-dimensional traveling wave with appropriate wavelength and amplitude. Given the limitations of existing technology for the longest discharge plasma wavelength ({approx}50 {mu}m) and shortest electron bunch length ({approx}100 {mu}m), we studied the regime in which the wakefield acts as slicer and buncher, while rejecting a large fraction of the injected bunch. The optimized parameters for the injected bunch are 10 pC, 300 fs at 6.7 MeV, to be injected into a 70 mm long channel at a plasma density of 7x10{sup 23} m{sup -3}. A linear wakefield is generated by a 2 TW laser focused to 30 {mu}m. The simulations predict an accelerated output of 0.6 pC, 10 fs bunches at 90 MeV, with energy spread below 10%. The design is currently being implemented. The design process also led to an important conclusion: output specifications directly comparable to those reported recently from 'laser-into-gas jet' experiments are feasible, provided the performance of the rf photogun is considerably enhanced. The paper outlines a photogun design providing such a performance level.

  13. Simulations of ion acceleration from ultrathin targets with the VEGA petawatt laser

    NASA Astrophysics Data System (ADS)

    Stockhausen, Luca C.; Torres, Ricardo; Conejero Jarque, Enrique

    2015-05-01

    The Spanish Pulsed Laser Centre (CLPU) is a new high-power laser facility for users. Its main system, VEGA, is a CPA Ti:Sapphire laser which, in its final phase, will be able to reach petawatt peak powers in pulses of 30 fs with a pulse contrast of 1:10^10 at 1 ps. The extremely low level of pre-pulse intensity makes this system ideally suited for studying the laser interaction with ultrathin targets. We have used the particle-in-cell (PIC) code OSIRIS to carry out 2D simulations of the acceleration of ions from ultrathin solid targets under the unique conditions provided by VEGA, with laser intensities up to 10^22 W cm^-2 impinging normally on 5 - 40 nm thick overdense plasmas, with different polarizations and pre-plasma scale lengths. We show how signatures of the radiation pressure dominated regime, such as layer compression and bunch formation, are only present with circular polarization. By passively shaping the density gradient of the plasma, we demonstrate an enhancement in peak energy up to tens of MeV and monoenergetic features. On the contrary, linear polarization at the same intensity level causes the target to blow up, resulting in much lower energies and broader spectra. One limiting factor of Radiation Pressure Acceleration is the development of Rayleigh-Taylor like instabilities at the interface of the plasma and photon fluid. This results in the formation of bubbles in the spatial profile of laser-accelerated proton beams. These structures were previously evidenced both experimentally and theoretically. We have performed 2D simulations to characterize this bubble-like structure and report on the dependency on laser and target parameters.

  14. Acceleration of a Particle-in-Cell Code for Space Plasma Simulations with OpenACC

    NASA Astrophysics Data System (ADS)

    Peng, Ivy Bo; Markidis, Stefano; Vaivads, Andris; Vencels, Juris; Deca, Jan; Lapenta, Giovanni; Hart, Alistair; Laure, Erwin

    2015-04-01

    We simulate space plasmas with the Particle-in-Cell (PIC) method, which uses computational particles to mimic electrons and protons in the solar wind and in Earth's magnetosphere. The magnetic and electric fields are computed by solving Maxwell's equations on a computational grid. In each PIC simulation step, there are four major phases: interpolation of fields to particles, updating the location and velocity of each particle, interpolation of particles to grids, and solving Maxwell's equations on the grid. We use the iPIC3D code, which was implemented in C++ using both MPI and OpenMP, for our case study. As of November 2014, heterogeneous systems using hardware accelerators such as Graphics Processing Units (GPUs) and Many Integrated Core (MIC) coprocessors for high-performance computing continue to grow in number among the top 500 most powerful supercomputers worldwide. Scientific applications for numerical simulations need to adapt to using accelerators to achieve portability and scalability on the coming exascale systems. In our work, we conduct a case study of using OpenACC to offload the computation-intensive parts, the particle mover and the interpolation of particles to grids, in a massively parallel Particle-in-Cell simulation code, iPIC3D, to multi-GPU systems. We use MPI for inter-node communication, for halo exchange and for communicating particles. We identify the parts most suitable for GPU acceleration by profiling with CrayPAT. We implemented a manual deep copy to address the challenges of porting C++ classes to the GPU. We document the necessary changes in the existing algorithms to adapt them for GPU computation. We present the challenges and findings as well as our methodology for porting a Particle-in-Cell code to multi-GPU systems using OpenACC. In this work, we will present the challenges, findings and our methodology of porting a Particle-in-Cell code for space applications as follows: We profile the iPIC3D code with the Cray Performance Analysis Tool (CrayPAT) and identify

  15. Fourier analysis of Solar atmospheric numerical simulations accelerated with GPUs (CUDA).

    NASA Astrophysics Data System (ADS)

    Marur, A.

    2015-12-01

    Solar dynamics from the convection zone creates a variety of waves that may propagate through the solar atmosphere. These waves are important in facilitating the energy transfer between the Sun's surface and the corona as well as in propagating energy throughout the solar system. How and where these waves are dissipated remains an open question. Advanced 3D numerical simulations have furthered our understanding of the processes involved. Fourier transforms are used to understand the nature of the waves by finding their frequency and wavelength through the simulated atmosphere, as well as the nature of their propagation and where they are dissipated. In order to analyze the different waves produced by the aforementioned simulations and models, Fast Fourier Transform algorithms will be applied. Since the processing of the multitude of different layers of the simulations (of the order of several 100^3 grid points) would be time-intensive and inefficient on a CPU, CUDA, a computing architecture that harnesses the power of the GPU, will be used to accelerate the calculations.
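
    For a single space-time cut through the simulation cube, the analysis described above amounts to locating peaks of the temporal and spatial power spectra. The numpy sketch below uses a synthetic test wave so the recovered period and wavelength can be checked by eye; in practice the transforms would run on the GPU (for example via cuFFT), and all names and numbers here are illustrative.

      import numpy as np

      # Synthetic space-time slice v(t, x): 256 s period, 1600 km wavelength
      nt, nx, dt, dx = 512, 256, 2.0, 50.0            # s and km, illustrative
      t = np.arange(nt)[:, None] * dt
      x = np.arange(nx)[None, :] * dx
      v = np.sin(2 * np.pi * (t / 256.0 - x / 1600.0))

      # Temporal spectrum (averaged over space) and spatial spectrum (over time)
      p_t = (np.abs(np.fft.rfft(v, axis=0)) ** 2).mean(axis=1)
      p_x = (np.abs(np.fft.rfft(v, axis=1)) ** 2).mean(axis=0)
      freqs = np.fft.rfftfreq(nt, d=dt)               # Hz
      wavenums = np.fft.rfftfreq(nx, d=dx)            # 1/km

      print("dominant period    :", 1.0 / freqs[p_t[1:].argmax() + 1], "s")
      print("dominant wavelength:", 1.0 / wavenums[p_x[1:].argmax() + 1], "km")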

  16. Exploring ligand dissociation pathways from aminopeptidase N using random acceleration molecular dynamics simulation.

    PubMed

    Liu, Ya; Tu, GuoGang; Lai, XiaoPing; Kuang, BinHai; Li, ShaoHua

    2016-10-01

    Aminopeptidase N (APN) is a zinc-dependent ectopeptidase involved in cell proliferation, secretion, invasion, and angiogenesis, and is widely recognized as an important cancer target. However, the mechanisms whereby ligands leave the active site of APN remain unknown. Investigating ligand dissociation processes is quite difficult, both in classical simulation methods and in experimental approaches. In this study, random acceleration molecular dynamics (RAMD) simulation was used to investigate the potential dissociation pathways of a ligand from APN. The results revealed three pathways (channels A, B and C) for ligand release. Channel A, which matches the hypothetical channel region, was the most preferred region for bestatin to dissociate from the enzyme, and is probably the major channel for the inner bound ligand. In addition, two alternative channels (channels B and C) were shown to be possible pathways for ligand egress. Meanwhile, we identified key residues controlling the dynamic features of APN channels. Identification of the dissociation routes will provide further mechanistic insights into APN, which will benefit the development of more promising APN inhibitors. Graphical Abstract: The release pathways of bestatin from the active site of aminopeptidase N were simulated using RAMD simulation. PMID:27624165

  17. Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability.

    PubMed

    Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto

    2016-06-14

    Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, under the assumption that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and accuracy of this approach, all the more so when these small-population species participate in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firings. The reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if this probability is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating the reaction propensities. PMID:27305997
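
    For orientation, the sketch below shows the basic rejection step that this family of algorithms builds on: a candidate reaction is drawn in proportion to precomputed propensity upper bounds, and the exact propensity is evaluated only to accept or reject the candidate. The bounded-acceptance refinement described above controls how tight these bounds are kept; exact_propensity() is a hypothetical callback supplied by the model, not an API from the paper.

        /* Sketch of the rejection step used in rejection-based SSA variants:
         * a candidate reaction is drawn from precomputed propensity upper
         * bounds (assumed positive), and the exact propensity is evaluated
         * only to accept or reject it.  exact_propensity() is a hypothetical
         * callback supplied by the model. */
        #include <stdlib.h>

        static double urand(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }

        int select_reaction(const double *upper_bound, int m,
                            double (*exact_propensity)(int))
        {
            double total = 0.0;
            for (int j = 0; j < m; j++) total += upper_bound[j];

            for (;;) {
                /* Pick a candidate reaction proportionally to its upper bound. */
                double r = urand() * total, acc = 0.0;
                int j = 0;
                while (j < m - 1 && (acc += upper_bound[j]) < r) j++;

                /* Accept with probability a_j / a_j^max; otherwise retry. */
                if (urand() <= exact_propensity(j) / upper_bound[j])
                    return j;
            }
        }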

  18. Pelegant: a parallel accelerator simulation code for electron generation and tracking.

    SciTech Connect

    Wang, Y.; Borland, M. D.; Accelerator Systems Division

    2006-01-01

    elegant is a general-purpose code for electron accelerator simulation that has a worldwide user base. Recently, many of the time-intensive elements were parallelized using MPI. Development has used modest Linux clusters and the BlueGene/L supercomputer at Argonne National Laboratory. This has provided very good performance for some practical simulations, such as multiparticle tracking with synchrotron radiation and emittance blow-up in the vertical rf kick scheme. The effort began with development of a concept that allowed for gradual parallelization of the code, using the existing beamline-element classification table in elegant. This was crucial, as it allowed parallelization without major changes in code structure and without major conflicts with the ongoing evolution of elegant. Because of rounding error and finite machine precision, validating a parallel program against a uniprocessor program with the requirement of bitwise identical results is notoriously difficult. We report validation of the simulation results of parallel elegant against those of serial elegant, applying Kahan's summation algorithm to dramatically improve accuracy in both versions. The quality of random numbers in a parallel implementation is very important for some simulations. Some practical experience with generating parallel random numbers by offsetting the seed of each random sequence according to the processor ID will be reported.
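
    Kahan's compensated summation, mentioned above as the tool used to make serial and parallel results comparable, is compact enough to show in full. The sketch below is a generic textbook version, not code taken from elegant.

        /* Compensated (Kahan) summation: a running error term re-injects the
         * low-order bits lost in each addition into the next one. */
        double kahan_sum(const double *x, long n)
        {
            double sum = 0.0, c = 0.0;       /* c carries the lost low-order part */
            for (long i = 0; i < n; i++) {
                double y = x[i] - c;         /* compensate for previous error   */
                double t = sum + y;          /* big + small: low bits of y lost */
                c = (t - sum) - y;           /* recover what was lost           */
                sum = t;
            }
            return sum;
        }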

  19. Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto

    2016-06-01

    Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, under the assumption that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and accuracy of this approach, all the more so when these small-population species participate in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firings. The reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if this probability is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating the reaction propensities.

  20. Statistical comparison between experiments and numerical simulations of shock-accelerated gas cylinders

    SciTech Connect

    Rider, William; Kamm, J. R.; Zoldi, C. A.; Tomkins, C. D.

    2002-01-01

    We present detailed spatial analysis comparing experimental data and numerical simulation results for Richtmyer-Meshkov instability experiments of Prestridge et al. and Tomkins et al. These experiments consist, respectively, of one and two diffuse cylinders of sulphur hexafluoride (SF6) impulsively accelerated by a Mach 1.2 shockwave in air. The subsequent fluid evolution and mixing are driven by the deposition of baroclinic vorticity at the interface between the two fluids. Numerical simulations of these experiments are performed with three different versions of high resolution finite volume Godunov methods, including a new weighted adaptive Runge-Kutta (WARK) scheme. We quantify the nature of the mixing using integral measures as well as fractal analysis and continuous wavelet transforms. Our investigation of the gas cylinder configurations follows the path of our earlier studies of the geometrically and dynamically more complex gas 'curtain' experiment. In those studies, we found significant discrepancies between the details of the experimentally measured mixing and the details of the numerical simulations. Here we evaluate the effects of these hydrodynamic integration techniques on the diffuse gas cylinder simulations, which we quantitatively compare with experimental data.

  1. Recent results and future challenges for large scale Particle-In-Cell simulations of plasma-based accelerator concepts

    SciTech Connect

    Huang, C.; An, W.; Decyk, V.K.; Lu, W.; Mori, W.B.; Tsung, F.S.; Tzoufras, M.; Morshed, S.; Antomsen, T.; Feng, B.; Katsouleas, T; Fonseca, R.A.; Martins, S.F.; Vieira, J.; Silva, L.O.; Geddes, C.G.R.; Cormier-Michel, E; Vay, J.-L.; Esarey, E.; Leemans, W.P.; Bruhwiler, D.L.; Cowan, B.; Cary, J.R.; Paul, K.

    2009-05-01

    The concepts and designs of plasma-based advanced accelerators for high energy physics and photon science are modeled in the SciDAC COMPASS project with a suite of Particle-In-Cell codes and simulation techniques, including the full electromagnetic model, the envelope model, the boosted-frame approach and the quasi-static model. In this paper, we report the progress of the development of these models and techniques and present recent results achieved with large-scale parallel PIC simulations. The simulation needs for modeling plasma-based advanced accelerators at the energy frontier are discussed and a path towards this goal is outlined.

  2. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback]

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  3. On-X Heart Valve Prosthesis: Numerical Simulation of Hemodynamic Performance in Accelerating Systole.

    PubMed

    Mirkhani, Nima; Davoudi, Mohammad Reza; Hanafizadeh, Pedram; Javidi, Daryoosh; Saffarian, Niloofar

    2016-09-01

    Numerical simulation of bileaflet mechanical heart valves (BMHVs) has been of interest to many researchers due to its capability of predicting hemodynamic performance. Many studies have tried to simulate this three-dimensional complex flow in order to analyze the effect of different valve designs on the blood flow pattern. However, simplified models and prescribed motion for the leaflets were utilized. In this paper, the transient complex blood flow in the ascending aorta has been investigated in a realistic model by fully coupled simulation. The geometry model for the aorta and the replaced valve is constructed based on medical images and extracted point clouds. A 23-mm On-X Medical BMHV, as the new generation design, has been selected for the flow field analysis. The two-way coupled simulation is conducted throughout the accelerating phase in order to obtain the valve dynamics in the opening process. The complex flow field in the hinge recess is captured precisely for all leaflet positions, and recirculating zones and elevated shear stress areas have been observed. Results indicate that the On-X valve yields a relatively lower transvalvular pressure gradient, which would lower cardiac external work. Furthermore, the converging inlet leads to a more uniform flow and consequently fewer turbulent eddies. However, the leaflets cannot open fully due to the middle diffuser-shaped orifice. In addition, the asymmetric butterfly-shaped hinge design and converging orifice lead to better hemodynamic performance. With the help of the two-way fluid-solid interaction simulation, the leaflet angle follows the experimental trends more closely than with the prescribed motion used in previous 3D simulations. PMID:27164902

  4. Convergence acceleration for partitioned simulations of the fluid-structure interaction in arteries

    NASA Astrophysics Data System (ADS)

    Radtke, Lars; Larena-Avellaneda, Axel; Debus, Eike Sebastian; Düster, Alexander

    2016-06-01

    We present a partitioned approach to fluid-structure interaction problems arising in analyses of blood flow in arteries. Several strategies to accelerate the convergence of the fixed-point iteration resulting from the coupling of the fluid and the structural sub-problem are investigated. Aitken relaxation and variants of the interface quasi-Newton least-squares method are applied to different test cases. A hybrid variant of two well-known versions of the interface quasi-Newton least-squares method is found to perform best. The test cases cover the typical boundary value problem faced when simulating the fluid-structure interaction in arteries, including a strong added mass effect and a wet surface which accounts for a large part of the overall surface of each sub-problem. A rubber-like Neo-Hookean material model and a soft-tissue-like Holzapfel-Gasser-Ogden material model are used to describe the artery wall and are compared in terms of stability and computational expense. To avoid any kind of locking, high-order finite elements are used to discretize the structural sub-problem. The finite volume method is employed to discretize the fluid sub-problem. We investigate the influence of mass-proportional damping and the material model chosen for the artery on the performance and stability of the acceleration strategies as well as on the simulation results. To show the applicability of the partitioned approach to clinically relevant studies, the hemodynamics in a pathologically deformed artery are investigated, taking the findings of the test case simulations into account.
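
    Of the strategies listed above, Aitken relaxation is the simplest to state: the coupling residuals from the current and previous iteration determine a dynamic relaxation factor for the next interface update. The sketch below shows this recurrence on plain arrays, with the outer coupling loop and the fluid and structure solvers left out; it is a generic illustration, not the implementation used in the paper.

        /* Sketch of Aitken dynamic relaxation for a fixed-point coupling
         * iteration x_{k+1} = x_k + omega_k * r_k, with r_k = F(x_k) - x_k.
         * The interface fields are reduced to plain arrays. */
        void aitken_update(const double *r_old, const double *r_new, int n,
                           double *omega)
        {
            double num = 0.0, den = 0.0;
            for (int i = 0; i < n; i++) {
                double dr = r_new[i] - r_old[i];
                num += r_old[i] * dr;
                den += dr * dr;
            }
            if (den > 0.0)
                *omega = -(*omega) * num / den;   /* Aitken recurrence for omega_k */
        }

        void relax_interface(double *x, const double *r_new, int n, double omega)
        {
            for (int i = 0; i < n; i++)
                x[i] += omega * r_new[i];         /* relaxed fixed-point update */
        }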

  5. A Case Study of Truncated Electrostatics for Simulation of Polyelectrolyte Brushes on GPU Accelerators

    SciTech Connect

    Nguyen, Trung D; Carrillo, Jan-Michael; Dobrynin, Andrey; Brown, W Michael

    2013-01-01

    Numerous issues have disrupted the trend of increasing computational performance through faster CPU clock frequencies. In order to exploit the potential performance of new computers, it is becoming increasingly desirable to re-evaluate computational physics methods and models with an eye towards approaches that allow for increased concurrency and data locality. The evaluation of long-range Coulombic interactions is a common bottleneck for molecular dynamics simulations. Enhanced truncation approaches have been proposed as an alternative method and are particularly well suited for many-core architectures and GPUs due to the inherent fine-grain parallelism that can be exploited. In this paper, we compare efficient truncation-based approximations for evaluating electrostatic forces with the more traditional particle-particle particle-mesh (P3M) method for molecular dynamics simulation of polyelectrolyte brush layers. We show that with the use of GPU accelerators, large parallel simulations using P3M can be greater than 3 times faster due to a reduction in the required mesh size. Alternatively, using a truncation-based scheme can improve performance even further. This approach can be up to 3.9 times faster than GPU-accelerated P3M for many polymer systems and results in accurate calculation of shear velocities and disjoining pressures for brush layers. For configurations with highly non-uniform charge distributions, however, we find that it is more efficient to use P3M; for these systems, computationally efficient parameterizations of the truncation-based approach do not produce accurate counterion density profiles or brush morphologies.

  6. Convergence acceleration for partitioned simulations of the fluid-structure interaction in arteries

    NASA Astrophysics Data System (ADS)

    Radtke, Lars; Larena-Avellaneda, Axel; Debus, Eike Sebastian; Düster, Alexander

    2016-02-01

    We present a partitioned approach to fluid-structure interaction problems arising in analyses of blood flow in arteries. Several strategies to accelerate the convergence of the fixed-point iteration resulting from the coupling of the fluid and the structural sub-problem are investigated. Aitken relaxation and variants of the interface quasi-Newton least-squares method are applied to different test cases. A hybrid variant of two well-known versions of the interface quasi-Newton least-squares method is found to perform best. The test cases cover the typical boundary value problem faced when simulating the fluid-structure interaction in arteries, including a strong added mass effect and a wet surface which accounts for a large part of the overall surface of each sub-problem. A rubber-like Neo-Hookean material model and a soft-tissue-like Holzapfel-Gasser-Ogden material model are used to describe the artery wall and are compared in terms of stability and computational expense. To avoid any kind of locking, high-order finite elements are used to discretize the structural sub-problem. The finite volume method is employed to discretize the fluid sub-problem. We investigate the influence of mass-proportional damping and the material model chosen for the artery on the performance and stability of the acceleration strategies as well as on the simulation results. To show the applicability of the partitioned approach to clinically relevant studies, the hemodynamics in a pathologically deformed artery are investigated, taking the findings of the test case simulations into account.

  7. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  8. Particle in Cell Simulations of the Pulsar Y-Point -- Nature of the Accelerating Electric Field

    NASA Astrophysics Data System (ADS)

    Belyaev, Mikhail

    2016-06-01

    Over the last decade, satellite observations have yielded a wealth of data on pulsed high-energy emission from pulsars. Several different models have been advanced to fit this data, all of which “paint” the emitting region onto a different portion of the magnetosphere. In the last few years, particle in cell simulations of pulsar magnetospheres have reached the point where they are able to self-consistently model particle acceleration and dissipation. One of the key findings of these simulations is that the region of the current sheet in and around the Y-point provides the highest rate of dissipation of Poynting flux (Belyaev 2015a). On the basis of this physical evidence, it is quite plausible that this region should be associated with the pulsed high energy emission from pulsars. We present high resolution PIC simulations of an axisymmetric pulsar magnetosphere, which are run using PICsar (Belyaev 2015b). These simulations focus on the particle dynamics and electric fields in and around the Y-point region. We run two types of simulations -- first, a force-free magnetosphere and second, a magnetosphere with a gap between the return current layer and the outflowing plasma in the polar wind zone. The latter setup is motivated by studies of pair production with general relativity (Philippov et al. 2015, Belyaev & Parfrey (in preparation)). In both cases, we find that the Y-point and the current sheet in its direct vicinity act like an “electric particle filter” outwardly accelerating particles of one sign of charge while returning the other sign of charge back to the pulsar. We argue that this is a natural behavior of the plasma as it tries to adjust to a solution that is as close to force-free as possible. As a consequence, a large E dot J develops in the vicinity of the Y-point leading to dissipation of Poynting flux. Our work is relevant for explaining the plasma physical mechanisms underlying pulsed high energy emission from pulsars.

  9. Start-to-end beam dynamics simulation of double triangular current profile generation in Argonne Wakefield Accelerator

    SciTech Connect

    Ha, G.; Power, J.; Kim, S. H.; Gai, W.; Kim, K.-J.; Cho, M. H.; Namkung, W.

    2012-12-21

    A double triangular current profile (DT) gives a high transformer ratio, which is the determining factor in the performance of a collinear wakefield accelerator. This current profile can be generated using an emittance exchange (EEX) beam line. The Argonne Wakefield Accelerator (AWA) facility plans to generate the DT profile using the EEX beam line. We conducted a start-to-end simulation of the AWA beam line using the PARMELA code. We also discuss the beam parameter requirements for generating the DT profile.

  10. Simulations of ion acceleration at non-relativistic shocks. III. Particle diffusion

    SciTech Connect

    Caprioli, D.; Spitkovsky, A.

    2014-10-10

    We use large hybrid (kinetic-protons-fluid-electrons) simulations to investigate the transport of energetic particles in self-consistent electromagnetic configurations of collisionless shocks. In previous papers of this series, we showed that ion acceleration may be very efficient (up to 10%-20% in energy), and outlined how the streaming of energetic particles amplifies the upstream magnetic field. Here, we measure particle diffusion around shocks with different strengths, finding that the mean free path for pitch-angle scattering of energetic ions is comparable with their gyroradii calculated in the self-generated turbulence. For moderately strong shocks, magnetic field amplification proceeds in the quasi-linear regime, and particles diffuse according to the self-generated diffusion coefficient, i.e., the scattering rate depends only on the amount of energy in modes with wavelengths comparable with the particle gyroradius. For very strong shocks, instead, the magnetic field is amplified up to non-linear levels, with most of the energy in modes with wavelengths comparable to the gyroradii of highest-energy ions, and energetic particles experience Bohm-like diffusion in the amplified field. We also show how enhanced diffusion facilitates the return of energetic particles to the shock, thereby determining the maximum energy that can be achieved in a given time via diffusive shock acceleration. The parameterization of the diffusion coefficient that we derive can be used to introduce self-consistent microphysics into large-scale models of cosmic ray acceleration in astrophysical sources, such as supernova remnants and clusters of galaxies.

  11. Shock experiments and numerical simulations on low energy portable electrically exploding foil accelerators

    NASA Astrophysics Data System (ADS)

    Saxena, A. K.; Kaushik, T. C.; Gupta, Satish C.

    2010-03-01

    Two low energy (1.6 and 8 kJ) portable electrically exploding foil accelerators are developed for moderately high pressure shock studies at small laboratory scale. Projectile velocities up to 4.0 km/s have been measured on Kapton flyers of thickness 125 μm and diameter 8 mm, using an in-house developed Fabry-Pérot velocimeter. An asymmetric tilt of typically few milliradians has been measured in flyers using fiber optic technique. High pressure impact experiments have been carried out on tantalum, and aluminum targets up to pressures of 27 and 18 GPa, respectively. Peak particle velocities at the target-glass interface as measured by Fabry-Pérot velocimeter have been found in good agreement with the reported equation of state data. A one-dimensional hydrodynamic code based on realistic models of equation of state and electrical resistivity has been developed to numerically simulate the flyer velocity profiles. The developed numerical scheme is validated against experimental and simulation data reported in literature on such systems. Numerically computed flyer velocity profiles and final flyer velocities have been found in close agreement with the previously reported experimental results with a significant improvement over reported magnetohydrodynamic simulations. Numerical modeling of low energy systems reported here predicts flyer velocity profiles higher than experimental values, indicating possibility of further improvement to achieve higher shock pressures.

  12. DOE accelerated strategic computing initiative: challenges and opportunities for predictive materials simulation capabilities

    SciTech Connect

    Mailhiot, C.

    1997-10-01

    In response to the unprecedented national security challenges derived from the end of nuclear testing, the Defense Programs of the Department of Energy has developed a long-term strategic plan based on a vigorous Science-Based Stockpile Stewardship (SBSS) program. The main objective of the SBSS program is to ensure confidence in the performance, safety, and reliability of the stockpile on the basis of a fundamental science-based approach. A central element of this approach is the development of predictive, full-physics, full-scale computer simulation tools. As a critical component of the SBSS program, the Accelerated Strategic Computing Initiative (ASCI) was established to provide the required advances in computer platforms and to enable predictive, physics-based simulation technologies. Foremost among the key elements needed to develop predictive simulation capabilities, the development of improved physics-based materials models has been universally identified as one of the highest-priority, highest-leverage activities. We indicate some of the materials modeling issues of relevance to stockpile materials and illustrate how the ASCI program will enable the tools necessary to advance the state of the art in the field of computational condensed matter and materials physics.

  13. Accelerating atomistic simulations through self-learning bond-boost hyperdynamics

    SciTech Connect

    Perez, Danny; Voter, Arthur F

    2008-01-01

    By altering the potential energy landscape on which molecular dynamics is carried out, the hyperdynamics method of Voter enables one to significantly accelerate the simulation of the state-to-state dynamics of physical systems. While very powerful, successful application of the method entails solving the subtle problem of parametrizing the so-called bias potential. In this study, we first clarify the constraints that must be obeyed by the bias potential and demonstrate that fast sampling of the biased landscape is key to obtaining proper kinetics. We then propose an approach by which the bond-boost potential of Miron and Fichthorn can be safely parametrized based on data acquired in the course of a molecular dynamics simulation. Finally, we introduce a procedure, the Self-Learning Bond Boost method, in which the parametrization is efficiently carried out on-the-fly for each new state that is visited during the simulation by safely ramping the strength of the bias potential up to its optimal value. The stability and accuracy of the method are demonstrated.
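
    Two pieces of the machinery described above are compact enough to sketch: the hyperdynamics clock, in which each MD step on the biased landscape advances physical time by dt * exp(DeltaV/kT), and a bond-boost-style bias envelope that switches off as the largest relative bond strain approaches a cutoff. Both functions below are generic illustrations with simplified functional forms, not the parametrization introduced in this paper.

        /* Hyperdynamics clock: each biased MD step of length dt corresponds to
         * a physical time of dt * exp(DeltaV / kT), where DeltaV is the bias
         * energy felt by the system during that step. */
        #include <math.h>

        double accumulate_hypertime(double hypertime, double dt,
                                    double bias_energy, double kT)
        {
            return hypertime + dt * exp(bias_energy / kT);
        }

        /* Bond-boost-style envelope (simplified): the bias is largest when all
         * bonds are near equilibrium and vanishes once the largest relative
         * bond strain eps_max reaches the cutoff q. */
        double bond_boost_bias(double eps_max, double dv_max, double q)
        {
            double s = eps_max / q;
            return (fabs(s) < 1.0) ? dv_max * (1.0 - s * s) : 0.0;
        }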

  14. Numerical simulations of the flow field ahead of an accelerating flame in an obstructed channel

    NASA Astrophysics Data System (ADS)

    Johansen, C.; Ciccarelli, G.

    2010-07-01

    The development of the unburned gas flow field ahead of a flame front in an obstructed channel was investigated using large eddy simulation (LES). The standard Smagorinsky-Lilly and dynamic Smagorinsky-Lilly subgrid models were used in these simulations. The geometry is essentially two-dimensional. The fence-type obstacles were placed on the top and bottom surfaces of a square cross-section channel, equally spaced along the channel length at an interval equal to the channel height. The laminar rollup of a vortex downstream of each obstacle, transition to turbulence, and growth of a recirculation zone between consecutive obstacles were observed in the simulations. By restricting the simulations to the early stages of the flame acceleration and by varying the domain width and domain length, the three-dimensionality of the vortex rollup process was investigated. It was found that initially the rollup process was two-dimensional and unaffected by the domain length and width. As the recirculation zone grew to fill the streamwise gap between obstacles, the length and width of the computational domain started to affect the simulation results. Three-dimensional flow structures formed within the shear layer, which was generated near the obstacle tips, and the core flow was affected by large-scale turbulence. The simulation predictions were compared to experimental schlieren images of the convection of a helium tracer. The development of recirculation zones resulted in the formation of contraction and expansion regions near the obstacles, which significantly affected the centerline gas velocity. Oscillations in the centerline unburned gas velocity were found to be the dominant cause of the experimentally observed early flame-tip velocity oscillations. At later simulation times, regular oscillations in the unburned streamwise gas velocity were not observed, which is contrary to the experimental evidence. This suggests that fluctuations in the burning rate might be the source of the late flame

  15. Benchmarked Simulations of Slow Capillary Discharges for Laser-Plasma Accelerators

    NASA Astrophysics Data System (ADS)

    Johnson, Jeffrey; Colella, Phillip; Geddes, Cameron; Mittelberger, Daniel; Bulanov, Stepan; Esarey, Eric; Leemans, Wim; Applied Numerical Algorithms Group (Lbl) Team; Loasis Laboratory (Lbl) Team

    2011-10-01

    We report our progress on a non-equilibrium, 2-temperature plasma model used for slow capillary discharges pertinent to laser-plasma accelerators. In these experiments, energy transport plays a major role in the formation of a plasma channel, which is used to guide the laser and enhance acceleration. We describe a series of simulations used to study the effects of electrical and thermal conduction, diffusion, and externally-applied magnetic fields in present and ongoing experiments with relevant geometries and densities. Scylla, a 1D cylindrical plasma/hydro code, was used to explore transport models and to resolve the radial profile of the plasma within the capillary. It has also been benchmarked against existing codes and experimental data. Since the capillary has 3D features such as gas feed slots, we have begun implementing a multi-dimensional AMR plasma model that solves the governing equations on irregular domains. Application to the BELLA Project at LBNL will be discussed. This work was supported by the Department of Energy under contract number DE-AC02-05-CH11231.

  16. Merging metadynamics into hyperdynamics: accelerated molecular simulations reaching time scales from microseconds to seconds.

    PubMed

    Bal, Kristof M; Neyts, Erik C

    2015-10-13

    The hyperdynamics method is a powerful tool to simulate slow processes at the atomic level. However, the construction of an optimal hyperdynamics potential is a task that is far from trivial. Here, we propose a generally applicable implementation of the hyperdynamics algorithm, borrowing two concepts from metadynamics. First, the use of a collective variable (CV) to represent the accelerated dynamics gives the method great flexibility and simplicity. Second, a metadynamics procedure can be used to construct a suitable history-dependent bias potential on-the-fly, effectively turning the algorithm into a self-learning accelerated molecular dynamics method. This collective variable-driven hyperdynamics (CVHD) method has a modular design: both the local system properties on which the bias is based, as well as the characteristics of the biasing method itself, can be chosen to match the needs of the considered system. As a result, system-specific details are abstracted from the biasing algorithm itself, making it extremely versatile and transparent. The method is tested on three model systems: diffusion on the Cu(001) surface and nickel-catalyzed methane decomposition, as examples of “reactive” processes with a bond-length-based CV, and the folding of a long polymer-like chain, using a set of dihedral angles as a CV. Boost factors up to 10^9, corresponding to a time scale of seconds, could be obtained while still accurately reproducing correct dynamics. PMID:26889516
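
    The metadynamics ingredient of CVHD, depositing Gaussian hills along a collective variable and summing them to obtain the current bias, can be sketched as below. This is a generic one-dimensional, grid-free illustration of hill deposition, not the CVHD implementation itself, and the type and field names are hypothetical.

        /* Sketch of metadynamics-style hill deposition on a single collective
         * variable: Gaussian hills of fixed height and width are dropped at
         * regular intervals, and the bias at any CV value is their sum. */
        #include <math.h>

        #define MAX_HILLS 100000

        typedef struct {
            double center[MAX_HILLS];   /* CV values where hills were dropped */
            int    count;
            double height, width;
        } HillBias;

        void deposit_hill(HillBias *b, double cv)
        {
            if (b->count < MAX_HILLS)
                b->center[b->count++] = cv;
        }

        double bias_at(const HillBias *b, double cv)
        {
            double v = 0.0;
            for (int i = 0; i < b->count; i++) {
                double d = cv - b->center[i];
                v += b->height * exp(-0.5 * d * d / (b->width * b->width));
            }
            return v;
        }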

  17. Accelerated electronic structure-based molecular dynamics simulations of shock-induced chemistry

    NASA Astrophysics Data System (ADS)

    Cawkwell, Marc

    2015-06-01

    The initiation and progression of shock-induced chemistry in organic materials at moderate temperatures and pressures are slow on the time scales available to regular molecular dynamics simulations. Accessing the requisite time scales is particularly challenging if the interatomic bonding is modeled using accurate yet expensive methods based explicitly on electronic structure. We have combined fast, energy conserving extended Lagrangian Born-Oppenheimer molecular dynamics with the parallel replica accelerated molecular dynamics formalism to study the relatively sluggish shock-induced chemistry of benzene around 13-20 GPa. We model interatomic bonding in hydrocarbons using self-consistent tight binding theory with an accurate and transferable parameterization. Shock compression and its associated transient, non-equilibrium effects are captured explicitly by combining the universal liquid Hugoniot with a simple shrinking-cell boundary condition. A number of novel methods for improving the performance of reactive electronic structure-based molecular dynamics by adapting the self-consistent field procedure on-the-fly will also be discussed. The use of accelerated molecular dynamics has enabled us to follow the initial stages of the nucleation and growth of carbon clusters in benzene under thermodynamic conditions pertinent to experiments.

  18. High-resolution simulations of non-Boussinesq downslope gravity currents in the acceleration phase

    NASA Astrophysics Data System (ADS)

    Dai, Albert; Huang, Yu-lin

    2016-02-01

    Gravity currents generated from an instantaneous buoyancy source of density contrast in the density ratio range of 0.3 ≤ γ ≤ 0.998 propagating downslope in the slope angle range of 0° ≤ θ < 90° have been investigated in the acceleration phase by means of high-resolution two-dimensional simulations of the incompressible variable-density Navier-Stokes equations. For all density contrasts considered in this study, front velocity history shows that, after the heavy fluid is released from rest, the gravity currents go through the acceleration phase, reaching a maximum front velocity Uf,max, followed by the deceleration phase. It is found that Uf,max increases as the density contrast increases and such a relationship is, for the first time, quantitatively described by the improved thermal theory considering the non-Boussinesq effects. Energy budgets show that, as the density contrast increases, the heavy fluid retains more fraction of potential energy loss while the ambient fluid receives less fraction of potential energy loss in the process of energy transfer during the propagation of downslope gravity currents. Previously, it was reported that for the Boussinesq case, the downslope gravity currents have a maximum of Uf,max at θ ≈ 40°. It is found, as is also confirmed by the energy budgets in this study, that the slope angle at which the downslope gravity currents have a maximum of Uf,max may increase beyond 40° as the density contrast increases.

  19. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    NASA Astrophysics Data System (ADS)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of the Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler on the Ubuntu 12.04 LTS Linux platform and newly named iOMP-SWAT in this study. The GNU utilities make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and verify that the parallel and serial simulations produce identical results. Among the 302 SWAT subroutines, the slowest routines were identified using GNU gprof and later modified using the Open Multi-Processing (OpenMP) library on an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross-compiling it with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets with a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibration will significantly improve the performance of SWAT optimization.
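
    The two acceleration ideas described above, an OpenMP loop over independent work units and a C helper that zeroes large arrays in bulk, can be sketched as follows. The sketch is generic: process_hru() is a hypothetical stand-in for a SWAT subroutine, and the loop structure is not taken from the actual iOMP-SWAT code.

        /* Sketch of the two acceleration ideas: a memset-based bulk zeroing
         * helper and an OpenMP loop over hydrologic response units (HRUs). */
        #include <omp.h>
        #include <string.h>

        void zero_array(double *a, long n)
        {
            memset(a, 0, (size_t)n * sizeof(double));   /* fast bulk zeroing */
        }

        void run_hrus(double *state, long nhru, long state_per_hru,
                      void (*process_hru)(double *))
        {
            /* Assuming the per-unit work is independent, the loop parallelizes. */
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < nhru; i++)
                process_hru(state + i * state_per_hru);
        }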

  20. Emotional states of drivers and the impact on speed, acceleration and traffic violations - a simulator study.

    PubMed

    Roidl, Ernst; Frehse, Berit; Höger, Rainer

    2014-09-01

    Maladjusted driving, such as aggressive driving and delayed reactions, is seen as one cause of traffic accidents. Such behavioural patterns could be influenced by strong emotions in the driver. The causes of emotions in traffic are divided into two distinct classes: personal factors and properties of the specific driving situation. In traffic situations, various appraisal factors are responsible for the nature and intensity of experienced emotions. These include whether another driver was accountable, whether goals were blocked and whether progress and safety were affected. In a simulator study, seventy-nine participants took part in four traffic situations which each elicited a different emotion. Each situation had critical elements (e.g. slow car, obstacle on the street) based on combinations of the appraisal factors. Driving parameters such as velocity, acceleration, and speeding, together with the experienced emotions, were recorded. Results indicate that anger leads to stronger acceleration and higher speeds even for 2 km beyond the emotion-eliciting event. Anxiety and contempt yielded similar but weaker effects, yet showed the same negative and dangerous driving pattern as anger. Fright correlated with stronger braking momentum and lower speeds directly after the critical event. PMID:24836476

  1. Beam quality simulation of the Boeing photoinjector accelerator for the MCTD project

    NASA Astrophysics Data System (ADS)

    Takeda, Harunori; Davis, Keith; Delo, Lance

    1991-07-01

    We present a performance study of the photoinjector accelerator installed at Boeing Corp., Seattle, for the Modular Component Technology Development (MCTD) program. This 5 MeV injector operates at 433 MHz and is designed to produce a normalized emittance less than 100π mm mrad. This study was performed using the PARMELA simulation code. We study parametrically the dependence of the beam emittance on the magnetic fields produced by beam-guiding coils and by the gap coil located immediately after the first injector cavity. We also study the effect of phasing between cavities and the bunched electron beam. In addition to considering the parameters that determine the electron beam environment, we consider the space-charge effect on the bunched beam at higher charge.

  2. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide to methods for properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can reach an astonishing factor of about 3000 when compared to a typical CPU. This significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
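
    As an illustration of the kind of update each GPU thread performs in such a code, the sketch below takes one Euler-Maruyama step for an overdamped Brownian motor in a tilted periodic potential driven by Gaussian white noise. The potential and parameters are generic placeholders, and the Box-Muller generator stands in for the per-thread RNG a CUDA implementation would use.

        /* One Euler-Maruyama step for an overdamped particle in a tilted
         * periodic potential with Gaussian white noise:
         *   dx = (-U'(x) + f) dt + sqrt(2 D dt) xi */
        #include <math.h>
        #include <stdlib.h>

        static double gaussian(void)
        {
            /* Box-Muller transform from two uniform deviates. */
            double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
            double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
            return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979 * u2);
        }

        double brownian_motor_step(double x, double dt, double force, double D)
        {
            double minus_dU = -sin(x);   /* potential U(x) = -cos(x), so -U'(x) = -sin(x) */
            return x + (minus_dU + force) * dt + sqrt(2.0 * D * dt) * gaussian();
        }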

  3. Simulation study of accelerator based quasi-mono-energetic epithermal neutron beams for BNCT.

    PubMed

    Adib, M; Habib, N; Bashter, I I; El-Mesiry, M S; Mansy, M S

    2016-01-01

    Filtered neutron techniques were applied to produce quasi-mono-energetic neutron beams in the energy range of 1.5-7.5 keV at the accelerator port, using the neutron spectrum generated by the Li(p,n)Be reaction. A simulation study was performed to characterize the filter components and the transmitted beam lines. The features of the filtered beams are detailed in terms of the optimal thicknesses of the primary and additive components. A computer code named "QMNB-AS" was developed to carry out the required calculations. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal and fast neutrons and γ-rays. PMID:26474209

  4. Current status of MCNP6 as a simulation tool useful for space and accelerator applications

    SciTech Connect

    Mashnik, Stepan G; Bull, Jeffrey S; Hughes, H. Grady; Prael, Richard E; Sierk, Arnold J

    2012-07-20

    For the past several years, a major effort has been undertaken at Los Alamos National Laboratory (LANL) to develop the transport code MCNP6, the latest LANL Monte-Carlo transport code representing a merger and improvement of MCNP5 and MCNPX. We emphasize a description of the latest developments of MCNP6 at higher energies to improve its reliability in calculating rare-isotope production, high-energy cumulative particle production, and a gamut of reactions important for space-radiation shielding, cosmic-ray propagation, and accelerator applications. We present several examples of validation and verification of MCNP6 compared to a wide variety of intermediate- and high-energy experimental data on reactions induced by photons, mesons, nucleons, and nuclei at energies from tens of MeV to about 1 TeV/nucleon, and compare to results from other modern simulation tools.

  5. Accelerated equilibrium core composition search using a new MCNP-based simulator

    NASA Astrophysics Data System (ADS)

    Seifried, Jeffrey E.; Gorman, Phillip M.; Vujic, Jasmina L.; Greenspan, Ehud

    2014-06-01

    MocDown is a new Monte Carlo depletion and recycling simulator which couples neutron transport with MCNP and transmutation with ORIGEN. This modular approach to depletion allows for flexible operation by incorporating the accelerated progression of a complex fuel processing scheme towards equilibrium and by allowing for the online coupling of thermo-fluids feedback. MocDown also accounts for the variation of decay heat with fuel isotopics evolution. In typical cases, MocDown requires just over a day to find the equilibrium core composition for a multi-recycling fuel cycle, with a self-consistent thermo-fluids solution, a task that required between one and two weeks using previous Monte Carlo-based approaches.

  6. Optimization of accelerator target and detector for portal imaging using Monte Carlo simulation and experiment

    NASA Astrophysics Data System (ADS)

    Flampouri, S.; Evans, P. M.; Verhaegen, F.; Nahum, A. E.; Spezi, E.; Partridge, M.

    2002-09-01

    Megavoltage portal images suffer from poor quality compared to those produced with kilovoltage x-rays. Several authors have shown that the image quality can be improved by modifying the linear accelerator to generate more low-energy photons. This work addresses the problem of using Monte Carlo simulation and experiment to optimize the beam and detector combination to maximize image quality for a given patient thickness. A simple model of the whole imaging chain was developed for investigation of the effect of the target parameters on the quality of the image. The optimum targets (6 mm thick aluminium and 1.6 mm copper) were installed in an Elekta SL25 accelerator. The first beam will be referred to as Al6 and the second as Cu1.6. A tissue-equivalent contrast phantom was imaged with the 6 MV standard photon beam and the experimental beams with standard radiotherapy and mammography film/screen systems. The arrangement with a thin Al target/mammography system improved the contrast from 1.4 cm bone in 5 cm water to 19% compared with 2% for the standard arrangement of a thick, high-Z target/radiotherapy verification system. The linac/phantom/detector system was simulated with the BEAM/EGS4 Monte Carlo code. Contrast calculated from the predicted images was in good agreement with the experiment (to within 2.5%). The use of MC techniques to predict images accurately, taking into account the whole imaging system, is a powerful new method for portal imaging system design optimization.

  7. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663

  8. A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator

    PubMed Central

    Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein

    2014-01-01

    Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of GEANT4 Application for Tomographic Emission (GATE) 6.1 were compared with Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon linac. To do this, similar geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in the PDD, dose profile, energy spectrum, radial mean energy or photon radial distribution calculated by the Standard and Livermore EM models and MCNPX. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty in all the simulations was <1%, namely 0.51%, 0.27%, 0.27% and 0.29% for PDDs of a 10 cm × 10 cm field size for MCNPX and the Standard, Livermore and Penelope models, respectively. Differences between the spectra in various regions, in radial mean energy and in photon radial distribution were due to differences in the cross-section and stopping-power data and in the simulation of physics processes among MCNPX and the three EM models. For example, in the Standard model the photoelectron direction is sampled from the Gavrila-Sauter distribution, whereas the photoelectron moves in the same direction as the incident photon in the photoelectric process of the Livermore and Penelope models. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model. PMID:24696804

  9. Simulation of launch and re-entry acceleration profiles for testing of Shuttle and unmanned microgravity research payloads

    NASA Technical Reports Server (NTRS)

    Cassanto, J. M.; Ziserman, H. I.; Chapman, D. K.; Korszun, Z. R.; Todd, D. K.

    1988-01-01

    A procedure was developed for the simulation of the launch and reentry acceleration profiles of the Space Shuttle (3.3 and 1.7 g maximum, respectively) and of two versions of NASA's proposed materials research Reusable Reentry Satellite (RRS) (8 and 4 g maximum, respectively). With a 7-m centrifuge, the time dependence of five different acceleration episodes was simulated for payload masses up to 59 kg. Test results obtained for the Materials Dispersion Apparatus, a commercial low-cost payload device, are presented.

  10. Progress towards the development of transient ram accelerator simulation as part of the U.S. Air Force Armament Directorate Research Program

    NASA Astrophysics Data System (ADS)

    Sinha, N.; York, B. J.; Dash, S. M.; Drabczuk, R.; Rolader, G. E.

    1992-07-01

    This paper describes the development of an advanced CFD simulation capability in support of the U.S. Air Force Armament Directorate's ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution adaptive grid clustering, high pressure thermochemistry, etc. Selected ram accelerator simulations are presented which serve to exhibit the CRAFT code's capabilities and identify some of the principal research/design issues.

  11. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    NASA Astrophysics Data System (ADS)

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  12. Accelerated path integral methods for atomistic simulations at ultra-low temperatures.

    PubMed

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5 (+). We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state. PMID:27497533

  13. Acceleration of 3D Finite Difference AWP-ODC for seismic simulation on GPU Fermi Architecture

    NASA Astrophysics Data System (ADS)

    Zhou, J.; Cui, Y.; Choi, D.

    2011-12-01

    AWP-ODC, a highly scalable parallel finite-difference application, enables petascale 3D earthquake calculations. This application generates realistic dynamic earthquake source descriptions and detailed physics-based anelastic ground motions at frequencies pertinent to safe building design. In 2010, the code achieved M8, a full dynamical simulation of a magnitude-8 earthquake on the southern San Andreas fault up to 2 Hz, the largest earthquake simulation ever performed. Building on the success of the previous work, we have implemented CUDA in AWP-ODC to accelerate wave propagation on GPU platforms. Our CUDA development aims at aggressive parallel efficiency, with optimized global and shared memory access to make the best use of the GPU memory hierarchy. Benchmarks on NVIDIA Tesla C2050 graphics cards demonstrated speedups of many tens in single precision compared to the serial implementation at a test problem size, while an MPI-CUDA implementation is in progress to extend our solver to multi-GPU clusters. Our CUDA implementation has been carefully verified for accuracy.

  14. Particle Acceleration and the Origin of X-Ray Flares in GRMHD Simulations of SGR A

    NASA Astrophysics Data System (ADS)

    Ball, David; Özel, Feryal; Psaltis, Dimitrios; Chan, Chi-kwan

    2016-07-01

    Significant X-ray variability and flaring have been observed from Sgr A* but are poorly understood from a theoretical standpoint. We perform general relativistic magnetohydrodynamic simulations that take into account a population of non-thermal electrons with energy distributions and injection rates motivated by PIC simulations of magnetic reconnection. We explore the effects of including these non-thermal electrons on the predicted broadband variability of Sgr A* and find that X-ray variability is a generic result of localizing non-thermal electrons to highly magnetized regions, where particles are likely to be accelerated via magnetic reconnection. The proximity of these high-field regions to the event horizon forms a natural connection between IR and X-ray variability and accounts for the rapid timescales associated with the X-ray flares. The qualitative nature of this variability is consistent with observations, producing X-ray flares that are always coincident with IR flares, but not vice versa, i.e., there are a number of IR flares without X-ray counterparts.

  15. Envelope Model Simulation of Laser Wakefield Acceleration with Realistic Laser Pulses from the Texas Petawatt

    NASA Astrophysics Data System (ADS)

    Weichman, Kathleen; Higuera, Adam; Abell, Dan; Cowan, Ben; Fazel, Neil; Cary, John; Downer, Michael

    2015-11-01

    In a laser wakefield accelerator (LWFA), diffraction of an over-focused laser pulse can provide localized electron injection, leading to the production of a monoenergetic electron bunch. While electron energies up to several GeV have been reported at the Texas Petawatt Laser facility, near-Gaussian beam simulations predict energies higher than have been observed. Experimentally measured laser profiles are non-Gaussian, indicating that closer agreement with experimental conditions is needed to predictively model this experiment. The implementation of the envelope model in the particle-in-cell code VORPAL lowers the computational cost of capturing injection dynamics during the early evolution of laser wakefields. We compare VORPAL envelope model simulations using laser pulses based on experimentally measured profiles versus a corresponding two-Gaussian approximation. We acknowledge DOE Grants No. DE-SC0011617 and DE-SC0012444, DOE/NSF Grant No. DE-SC0012584, and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. KW is supported by the DOE CSGF under Grant No. DE-FG02-97ER25308.

  16. GPU accelerated flow solver for direct numerical simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Salvadore, Francesco; Bernardini, Matteo; Botti, Michela

    2013-02-01

    Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require nontrivial effort, even for high-performance computing specialists. In the present paper we accurately describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier-Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is proved by performing a DNS of a spatially evolving compressible mixing layer.

  17. Accelerating the Design of Solar Thermal Fuel Materials through High Throughput Simulations

    SciTech Connect

    Liu, Y; Grossman, JC

    2014-12-01

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.

  18. GPU accelerated flow solver for direct numerical simulation of turbulent flows

    SciTech Connect

    Salvadore, Francesco; Botti, Michela

    2013-02-15

    Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require non-trivial effort, even for high-performance computing specialists. In the present paper we thoroughly describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier–Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is demonstrated by performing a DNS of a spatially evolving compressible mixing layer.

  19. Accelerating groundwater flow simulation in MODFLOW using JASMIN-based parallel computing.

    PubMed

    Cheng, Tangpei; Mo, Zeyao; Shao, Jingli

    2014-01-01

    To accelerate the groundwater flow simulation process, this paper reports our work on developing an efficient parallel simulator by rebuilding the well-known software MODFLOW on JASMIN (J Adaptive Structured Meshes applications Infrastructure). The rebuilding is achieved by designing a patch-based data structure and parallel algorithms and by making slight modifications to the computational flow and subroutines in MODFLOW. Both the memory requirements and the computing effort are distributed among all processors; to reduce communication cost, data transfers are batched and conveniently handled by adding ghost nodes to each patch. To further improve performance, constant-head/inactive cells are tagged and neglected during the linear solve, and an efficient load-balancing strategy is presented. The accuracy and efficiency are demonstrated through modeling three scenarios: the first application is a field flow problem located at Yanming Lake in China, used to help design a reasonable quantity of groundwater exploitation. Desirable numerical accuracy and significant performance enhancement are obtained. Typically, the tagged program with the load-balancing strategy running on 40 cores is six times faster than the fastest MICCG-based MODFLOW program. The second test simulates flow in a highly heterogeneous aquifer. The AMG-based JASMIN program running on 40 cores is nine times faster than the GMG-based MODFLOW program. The third test is a simplified transient flow problem with on the order of tens of millions of cells, used to examine scalability. Compared to 32 cores, parallel efficiencies of 77% and 68% are obtained on 512 and 1024 cores, respectively, which indicates impressive scalability. PMID:23600445
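
    The patch and ghost-node idea described above can be illustrated with a minimal halo exchange; the sketch below uses mpi4py rather than JASMIN, and the patch size, array name, and script name are hypothetical, so it only shows the communication pattern, not the MODFLOW rebuild itself.

```python
# Minimal halo-exchange sketch (mpi4py, not JASMIN): each rank owns one 1-D
# patch of cells plus two ghost cells that are filled from its neighbours,
# the basic pattern behind batching boundary data before a parallel solve.
# Run with, e.g.:  mpiexec -n 4 python ghost_exchange.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                                    # interior cells owned by this rank
h = np.full(n_local + 2, float(rank))          # +2 ghost cells (indices 0 and -1)

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send the rightmost interior cell right, receive the left neighbour's value.
comm.Sendrecv(sendbuf=h[-2:-1], dest=right, recvbuf=h[0:1], source=left)
# Send the leftmost interior cell left, receive the right neighbour's value.
comm.Sendrecv(sendbuf=h[1:2], dest=left, recvbuf=h[-1:], source=right)

print(f"rank {rank}: left ghost = {h[0]}, right ghost = {h[-1]}")
```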

  20. The design of a simulated in-line side-coupled 6 MV linear accelerator waveguide

    SciTech Connect

    St Aubin, Joel; Steciw, Stephen; Fallone, B. G.

    2010-02-15

    Purpose: The design of a 3D in-line side-coupled 6 MV linac waveguide for medical use is given, and the effect of the side-coupling and port irises on the radio frequency (RF), beam dynamics, and dosimetric solutions is examined. This work was motivated by our research on a linac-MR hybrid system, where accurate electron trajectory information for a clinical medical waveguide in the presence of an external magnetic field was needed. Methods: For this work, the design of the linac waveguide was generated using the finite element method. The design outlined here incorporates the necessary geometric changes needed to incorporate a full-end accelerating cavity with a single-coupling iris, a waveguide-cavity coupling port iris that allows power transfer into the waveguide from the magnetron, as well as a method to control the RF field magnitude within the first half accelerating cavity into which the electrons from the gun are injected. Results: With the full waveguide designed to resonate at 2998.5 ± 0.1 MHz, a full 3D RF field solution was obtained. The accuracy of the 3D RF field solution was estimated through a comparison of important linac parameters (Q factor, shunt impedance, transit time factor, and resonant frequency) calculated for one accelerating cavity with the benchmarked program SUPERFISH. It was found that the maximum difference between the 3D solution and SUPERFISH was less than 0.03%. The eigenvalue solver, which determines the resonant frequencies of the 3D side-coupled waveguide simulation, was shown to be highly accurate through a comparison with lumped circuit theory. Two different waveguide geometries were examined, one incorporating a 0.5 mm first side cavity shift and another with a 1.5 mm first side cavity shift. The asymmetrically placed side-coupling irises and the port iris for both models were shown to introduce asymmetries in the RF field large enough to cause a peak shift and skewing (center of gravity minus peak shift) of an initially

  1. Advanced Simulation and Optimization Tools for Dynamic Aperture of Non-scaling FFAGs and Accelerators including Modern User Interfaces

    SciTech Connect

    Mills, F.; Makino, Kyoko; Berz, Martin; Johnstone, C.

    2010-09-01

    With the U.S. experimental effort in HEP largely located at laboratories supporting the operations of large, highly specialized accelerators, colliding beam facilities, and detector facilities, the understanding and prediction of high-energy particle accelerators become critical to the overall success of the DOE HEP program. One area in which small businesses can contribute to the ongoing success of the U.S. program in HEP is through innovations in computer techniques and sophistication in the modeling of high-energy accelerators. Accelerator modeling at these facilities is performed by experts, with the product generally highly specific and representative only of in-house accelerators or special-interest accelerator problems. Development of new types of accelerators like FFAGs, with their wide choices of parameter modifications, complicated fields, and the simultaneous need to efficiently handle very large emittance beams, requires the availability of new simulation environments to assure predictability in operation. Here, ease of use and good interfaces are critical to realizing a successful model and to optimizing a new design or the working parameters of a machine. In Phase I, various core modules for the design and analysis of FFAGs were developed, and Graphical User Interfaces (GUIs) were investigated as an alternative to the more general yet less easily managed console-type output that COSY provides.

  2. The molecular kink paradigm for rubber elasticity: Numerical simulations of explicit polyisoprene networks at low to moderate tensile strains

    NASA Astrophysics Data System (ADS)

    Hanson, David E.

    2011-08-01

    Based on recent molecular dynamics and ab initio simulations of small isoprene molecules, we propose a new ansatz for rubber elasticity. We envision a network chain as a series of independent molecular kinks, each comprised of a small number of backbone units, and the strain as being imposed along the contour of the chain. We treat chain extension in three distinct force regimes: (Ia) near zero strain, where we assume that the chain is extended within a well defined tube, with all of the kinks participating simultaneously as entropic elastic springs, (II) when the chain becomes sensibly straight, giving rise to a purely enthalpic stretching force (until bond rupture occurs) and, (Ib) a linear entropic regime, between regimes Ia and II, in which a force limit is imposed by tube deformation. In this intermediate regime, the molecular kinks are assumed to be gradually straightened until the chain becomes a series of straight segments between entanglements. We assume that there exists a tube deformation tension limit that is inversely proportional to the chain path tortuosity. Here we report the results of numerical simulations of explicit three-dimensional, periodic, polyisoprene networks, using these extension-only force models. At low strain, crosslink nodes are moved affinely, up to an arbitrary node force limit. Above this limit, non-affine motion of the nodes is allowed to relax unbalanced chain forces. Our simulation results are in good agreement with tensile stress vs. strain experiments.
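
    As a rough illustration of the three force regimes described above, the sketch below implements a piecewise chain-tension function; the functional forms and every parameter value are placeholders chosen for readability, not the constitutive relations or constants used in the paper.

```python
# Illustrative piecewise chain-tension model: an entropic regime near zero
# strain, a tube-limited plateau, and an enthalpic regime once the chain is
# straight. All parameters below are hypothetical placeholders.
import numpy as np

def chain_force(strain, k_entropic=20.0, f_tube_limit=5.0,
                strain_straight=0.5, k_enthalpic=50.0):
    """Dimensionless chain tension as a function of imposed strain."""
    strain = np.asarray(strain, dtype=float)
    f = k_entropic * strain                          # regime Ia: entropic springs
    f = np.minimum(f, f_tube_limit)                  # regime Ib: tube-deformation cap
    straightened = strain > strain_straight          # regime II: enthalpic stretching
    return np.where(straightened,
                    f_tube_limit + k_enthalpic * (strain - strain_straight), f)

print(np.round(chain_force(np.linspace(0.0, 1.0, 6)), 2))
```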

  3. Experimental and Simulated Characterization of a Beam Shaping Assembly for Accelerator- Based Boron Neutron Capture Therapy (AB-BNCT)

    NASA Astrophysics Data System (ADS)

    Burlon, Alejandro A.; Girola, Santiago; Valda, Alejandro A.; Minsky, Daniel M.; Kreiner, Andrés J.

    2010-08-01

    In the framework of the construction of a Tandem Electrostatic Quadrupole Accelerator facility devoted to Accelerator-Based Boron Neutron Capture Therapy, a Beam Shaping Assembly has been characterized by means of Monte Carlo simulations and measurements. The neutrons were generated via the 7Li(p, n)7Be reaction by irradiating a thick LiF target with a 2.3 MeV proton beam delivered by the TANDAR accelerator at CNEA. The emerging neutron flux was measured by means of activation foils, while the beam quality and directionality were evaluated by means of Monte Carlo simulations. The parameters comply with those suggested by the IAEA. Finally, an improvement obtained by adding a beam collimator has been evaluated.

  4. Experimental and Simulated Characterization of a Beam Shaping Assembly for Accelerator- Based Boron Neutron Capture Therapy (AB-BNCT)

    SciTech Connect

    Burlon, Alejandro A.; Valda, Alejandro A.; Girola, Santiago; Minsky, Daniel M.; Kreiner, Andres J.

    2010-08-04

    In the framework of the construction of a Tandem Electrostatic Quadrupole Accelerator facility devoted to Accelerator-Based Boron Neutron Capture Therapy, a Beam Shaping Assembly has been characterized by means of Monte Carlo simulations and measurements. The neutrons were generated via the 7Li(p, n)7Be reaction by irradiating a thick LiF target with a 2.3 MeV proton beam delivered by the TANDAR accelerator at CNEA. The emerging neutron flux was measured by means of activation foils, while the beam quality and directionality were evaluated by means of Monte Carlo simulations. The parameters comply with those suggested by the IAEA. Finally, an improvement obtained by adding a beam collimator has been evaluated.

  5. Clarifying the Narrative Paradigm.

    ERIC Educational Resources Information Center

    Fisher, Walter R.

    1989-01-01

    Replies to Rowland's article (same issue) on Fisher's views of the narrative paradigm. Clarifies the narrative paradigm by discussing three senses in which "narration" can be understood, and by indicating what the narrative paradigm is not. (SR)

  6. Simulation of launch and re-entry acceleration profiles for testing of shuttle and unmanned microgravity research payloads

    NASA Astrophysics Data System (ADS)

    Cassanto, J. M.; Ziserman, H. I.; Chapman, D. K.; Korszun, Z. R.; Todd, P.

    Microgravity experiments designed for execution in Get-Away Special canisters, Hitchhiker modules, and Reusable Re-entry Satellites will be subjected to launch and re-entry accelerations. Crew-dependent provisions for preventing acceleration damage to equipment or products will not be available for these payloads during flight; therefore, the effects of launch and re-entry accelerations on all aspects of such payloads must be evaluated prior to flight. A procedure was developed for conveniently simulating the launch and re-entry acceleration profiles of the Space Shuttle (3.3 and 1.7 × g maximum, respectively) and of two versions of NASA's proposed materials research Re-usable Re-entry Satellite (8 × g maximum in one case and 4 × g in the other). By using the 7 m centrifuge of the Gravitational Plant Physiology Laboratory in Philadelphia it was found possible to simulate the time dependence of these 5 different acceleration episodes for payload masses up to 59 kg. A commercial low-cost payload device, the “Materials Dispersion Apparatus” of Instrumentation Technology Associates was tested for (1) integrity of mechanical function, (2) retention of fluid in its compartments, and (3) integrity of products under simulated re-entry g-loads. In particular, the sharp rise from 1 g to maximum g-loading that occurs during re-entry in various unmanned vehicles was successfully simulated, conditions were established for reliable functioning of the MDA, and crystals of 5 proteins suspended in compartments filled with mother liquor were subjected to this acceleration load.

  7. Simulation of plasma flows in self-field Lorentz force accelerators

    NASA Astrophysics Data System (ADS)

    Sankaran, Kameshwaran

    2005-07-01

    A characteristics-based scheme for the solution of ideal MHD equations was developed, and its ability to capture time-dependent discontinuities monotonically, as well as maintain force-free equilibrium, was demonstrated. Detailed models of classical transport, real equations of state, multi-level ionization models, anomalous transport, and multi-temperature effects for argon and lithium plasmas were implemented in this code. The entire set of equations was solved on non-orthogonal meshes, using parallel computers, to provide realistic description of flowfields in various thruster configurations. The calculated flowfield in gas-fed magnetoplasmadynamic thrusters (MPDT), such as the full-scale benchmark thruster (FSBT), compared favorably with measurements. These simulations provided insight into some aspects of FSBT operation, such as the weak role of the anode geometry in affecting the coefficient of thrust, the predominantly electromagnetic nature of the thrust at nominal operating conditions, and the importance of the near-cathode region in energy dissipation. Furthermore, the simulated structure of the flow embodied a number of photographically-recorded features of the FSBT discharge. Based on the confidence gained from its success with gas-fed MPDT flows, this code was then used to study a promising high-power spacecraft thruster, the lithium Lorentz force accelerator (LiLFA), in order to uncover its interior plasma properties and to obtain insight into underlying physical processes that had been poorly understood. The simulated flowfields of density, velocity, ionization, and anomalous resistivity were shown to change qualitatively with the total current. The simulations show the presence of a velocity reducing shock at low current, which disappeared as the current was increased above the value corresponding to nominal operation. The breakdown and scaling of the various components of thrust and power were revealed. The line on which the magnetic pressure

  8. Accelerated 20-year sunlight exposure simulation of a photochromic foldable intraocular lens in a rabbit model

    PubMed Central

    Werner, Liliana; Abdel-Aziz, Salwa; Peck, Carolee Cutler; Monson, Bryan; Espandar, Ladan; Zaugg, Brian; Stringham, Jack; Wilcox, Chris; Mamalis, Nick

    2011-01-01

    PURPOSE To assess the long-term biocompatibility and photochromic stability of a new photochromic hydrophobic acrylic intraocular lens (IOL) under extended ultraviolet (UV) light exposure. SETTING John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, USA. DESIGN Experimental study. METHODS A Matrix Aurium photochromic IOL was implanted in right eyes and a Matrix Acrylic IOL without photochromic properties (n = 6) or a single-piece AcrySof Natural SN60AT (N = 5) IOL in left eyes of 11 New Zealand rabbits. The rabbits were exposed to a UV light source of 5 mW/cm2 for 3 hours during every 8-hour period, equivalent to 9 hours a day, and followed for up to 12 months. The photochromic changes were evaluated during slitlamp examination by shining a penlight UV source in the right eye. After the rabbits were humanely killed and the eyes enucleated, study and control IOLs were explanted and evaluated in vitro on UV exposure and studied histopathologically. RESULTS The photochromic IOL was as biocompatible as the control IOLs after 12 months under conditions simulating at least 20 years of UV exposure. In vitro evaluation confirmed the retained optical properties, with photochromic changes observed within 7 seconds of UV exposure. The rabbit eyes had clinical and histopathological changes expected in this model with a 12-month follow-up. CONCLUSIONS The new photochromic IOL turned yellow only on exposure to UV light. The photochromic changes were reversible, reproducible, and stable over time. The IOL was biocompatible with up to 12 months of accelerated UV exposure simulation. PMID:21241924

  9. Earthquake Dynamics in Laboratory Model and Simulation - Accelerated Creep as Precursor of Instabilities

    NASA Astrophysics Data System (ADS)

    Grzemba, B.; Popov, V. L.; Starcevic, J.; Popov, M.

    2012-04-01

    Shallow earthquakes can be considered as a result of tribological instabilities, so-called stick-slip behaviour [1,2], meaning that sudden slip occurs at already existing rupture zones. From a contact mechanics point of view it is clear that no motion can arise completely suddenly: the material will always creep in an existing contact in the load direction before breaking loose. If there is a measurable creep before the instability, this could serve as a precursor. To examine this theory in detail, we built an elementary laboratory model with pronounced stick-slip behaviour. Different material pairings, such as steel-steel, steel-glass and marble-granite, were analysed at different driving force rates. The displacement was measured with a resolution of 8 nm. We were able to show that a measurable accelerated creep precedes the instability. Near the instability, this creep is sufficiently regular to serve as a basis for a highly accurate prediction of the onset of macroscopic slip [3]. In our model a prediction is possible within the last few percent of the preceding stick time. We hope to extend this period. Furthermore, we showed that the slow creep as well as the fast slip can be described very well by the Dieterich-Ruina friction law, if we include the contribution of local contact rigidity. The simulation matches the experimental curves over five orders of magnitude. This friction law was originally formulated for rocks [4,5] and takes into account the dependency of the coefficient of friction on the sliding velocity and on the contact history. The simulations using the Dieterich-Ruina friction law back up the observation of a universal behaviour of the creep's acceleration. We are working on several extensions of our model to more dimensions in order to move closer towards representing a full three-dimensional continuum. The first step will be an extension to two degrees of freedom to analyse the interdependencies of the instabilities. We also plan
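
    For readers unfamiliar with the Dieterich-Ruina law mentioned above, the sketch below integrates the standard rate-and-state friction equations (with the aging form of the state evolution) for a spring-driven block; the parameter values are illustrative only, and the local contact-rigidity contribution discussed in the abstract is not included.

```python
# Sketch of the standard rate-and-state (Dieterich-Ruina) friction law with the
# "aging" state-evolution equation, for a block driven through a spring at
# constant velocity. Parameters are illustrative, not values from the paper.
import numpy as np

mu0, a, b = 0.6, 0.010, 0.015      # reference friction; velocity weakening (b > a)
v0, d_c = 1e-6, 1e-5               # reference velocity [m/s], critical slip [m]
sigma_n = 1e6                      # normal stress [Pa]
k, v_drive = 1e9, 3e-6             # spring stiffness [Pa/m] (above critical), drive speed

theta = d_c / v0                   # start at steady sliding at v0 ...
slip, load_point = 0.0, mu0 * sigma_n / k
dt = 1e-3
for _ in range(200000):            # ... then drive three times faster for 200 s
    load_point += v_drive * dt
    tau = k * (load_point - slip)                       # shear stress from the spring
    # invert tau = sigma_n * [mu0 + a*ln(v/v0) + b*ln(v0*theta/d_c)] for v
    v = v0 * np.exp((tau / sigma_n - mu0 - b * np.log(v0 * theta / d_c)) / a)
    theta += (1.0 - v * theta / d_c) * dt               # Dieterich "aging" law
    slip += v * dt

print(f"slip velocity -> {v:.2e} m/s, steady friction -> {tau / sigma_n:.4f}")
```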

  10. Simulations of second-order Fermi acceleration of electrons: Solving the injection problem

    SciTech Connect

    Gisler, G.R.

    1991-12-31

    The boosting of electrons from a Maxwellian distribution into a suprathermal power-law tail has long been recognized as an important bottleneck governing the subsequent acceleration of some of these electrons to relativistic energies. This is the seed or injection problem. I study this boosting process using a test-particle simulation code, following the full equations of motion of tens of thousands of electrons chosen from a thermal population as they move through general time-dependent magnetic fields. Inhomogeneities in the magnetic field are provided by finite swarms of moving current loops with Maxwellian velocity distributions and power-law distributions of loop size and dipole moment strength. Whether bulk heating or boosting occurs is found to depend on the size of the swarm thermal speed compared to the electron thermal speed. When the swarm thermal speed is comparable to the electron thermal speed the entire electron population is heated by encounters with the rapidly moving current loops, approximately preserving the Maxwellian character of the electron distribution. On the other hand, at very low swarm thermal speeds there is no bulk heating; instead one percent or fewer of the electrons are boosted into a power-law suprathermal tail with a differential energy spectral index between 1 and 2. Individual boosts of 2000 and more have been observed in samples of 50,000 electrons. Most of the strongly boosted electrons have initial energies that are well below the peak of the initial Maxwellian.
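
    A test-particle code of this kind integrates the Lorentz-force equation of motion for each electron in prescribed fields; the sketch below shows the Boris push, a standard integrator for such problems (the abstract does not state which scheme the code actually uses), applied to a single particle in a uniform magnetic field. All field and parameter values are illustrative.

```python
# Minimal sketch of a Boris push for one test particle in prescribed E and B
# fields (non-relativistic, normalized units). Shown only as the common choice
# for test-particle work; not the code described in the abstract.
import numpy as np

def boris_push(x, v, q_m, E, B, dt):
    """Advance one particle by dt: Boris rotation for v, simple update for x."""
    v_minus = v + 0.5 * dt * q_m * E(x)
    t = 0.5 * dt * q_m * B(x)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * q_m * E(x)
    return x + dt * v_new, v_new

# Uniform B along z, no E: the particle gyrates and |v| should stay constant.
E = lambda x: np.zeros(3)
B = lambda x: np.array([0.0, 0.0, 1.0])
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
dt, q_m = 0.05, 1.0
for _ in range(2000):
    x, v = boris_push(x, v, q_m, E, B, dt)
print(f"|v| after 2000 steps: {np.linalg.norm(v):.6f} (should remain ~1.0)")
```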

  11. Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Qin; Jiang, Xiao-Guo

    2012-09-01

    To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuation of the X-rays in attenuators of different thicknesses and thus provide the transmission data. Spectrum estimation from transmission data is a well-known ill-conditioned problem. A method based on iterative perturbations, started from an initial guess, is employed to derive the X-ray spectra. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness of the spectrum function. In this work, various filter materials are used as attenuators, and the condition for an accurate and robust solution of the X-ray spectrum calculation is demonstrated. The influence of scattered photons within different intervals of emergence angle on the X-ray spectrum reconstruction is also analyzed.
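
    The forward model behind such a reconstruction is a transmission curve T(t) = sum_i S_i exp(-mu_i t); the sketch below recovers a synthetic spectrum from that kind of data with a simple multiplicative iterative update started from a flat guess. It only illustrates the ill-conditioned inversion, not the paper's perturbation-plus-smoothness algorithm, and the attenuation coefficients, energy grid, and "true" spectrum are all made up.

```python
# Simplified illustration (not the paper's algorithm): recover a discrete
# spectrum S(E) from transmission data T(t_j) = sum_i S_i * exp(-mu_i * t_j)
# using a multiplicative iterative update from a flat initial guess.
import numpy as np

rng = np.random.default_rng(0)
energies = np.linspace(1.0, 20.0, 40)                  # synthetic energy bins [MeV]
mu = 0.2 + 1.0 / energies                              # toy attenuation per cm
thick = np.linspace(0.0, 30.0, 25)                     # attenuator thicknesses [cm]

s_true = np.exp(-0.5 * ((energies - 6.0) / 3.0) ** 2)  # synthetic "true" spectrum
s_true /= s_true.sum()
A = np.exp(-np.outer(thick, mu))                       # transmission kernel
T_meas = A @ s_true * (1 + 0.01 * rng.standard_normal(thick.size))  # 1% noise

s = np.full_like(s_true, 1.0 / s_true.size)            # flat initial guess
for _ in range(2000):
    T_calc = A @ s
    s *= (A.T @ (T_meas / T_calc)) / A.sum(axis=0)     # multiplicative update
    s /= s.sum()

print(f"rms spectrum error: {np.sqrt(np.mean((s - s_true) ** 2)):.2e}")
```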

  12. Numerical simulations of recent proton acceleration experiments with sub-100 TW laser systems

    NASA Astrophysics Data System (ADS)

    Sinigardi, Stefano

    2016-09-01

    Recent experiments carried out at the Italian National Research Center, National Optics Institute Department in Pisa, show interesting results regarding the maximum proton energies achievable with sub-100 TW laser systems. While laser systems are being continuously upgraded in laboratories around the world, a growing effort is also devoted to stabilizing ion acceleration and making its results reproducible. Almost all applications require a beam with fixed performance, so that the energy spectrum and the total charge exhibit only moderate shot-to-shot variations. This goal is far from being achieved, but many paths are being explored in order to reach it. Some of the variability comes from fluctuations in laser intensity and focusing due to optics instability. Other variation sources come from small differences in the target structure. The target structure can vary substantially when it is impacted by the main pulse, depending on the prepulse duration and intensity, the shape of the main pulse and the total energy deposited. In order to qualitatively describe the prepulse effect, we present a two-dimensional parametric scan of its relevant parameters. A single case is also analyzed with a full three-dimensional simulation, obtaining reasonable agreement between the numerical and the experimental energy spectrum.

  13. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    SciTech Connect

    Grimm, Simon L.; Stadel, Joachim G.

    2014-11-20

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.
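
    The symplectic building block underlying such integrators is the kick-drift-kick (leapfrog) step; the sketch below applies it to a two-body orbit to show the bounded energy error that motivates its use. GENGA's hybrid Hamiltonian splitting and close-encounter handling are not reproduced here, and the units and step size are arbitrary choices.

```python
# Sketch of the kick-drift-kick (leapfrog) step used as the building block of
# symplectic N-body schemes, applied to a two-body orbit with G*M = 1. This is
# not GENGA's hybrid integrator, only the basic symplectic step.
import numpy as np

def accel(x):
    r = np.linalg.norm(x)
    return -x / r**3                      # central inverse-square gravity, GM = 1

def kdk_step(x, v, dt):
    v = v + 0.5 * dt * accel(x)           # kick
    x = x + dt * v                        # drift
    v = v + 0.5 * dt * accel(x)           # kick
    return x, v

x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])     # circular orbit, radius 1
e0 = 0.5 * v @ v - 1.0 / np.linalg.norm(x)
dt = 0.01
for _ in range(int(200 * np.pi / dt)):                # roughly 100 orbits
    x, v = kdk_step(x, v, dt)
e1 = 0.5 * v @ v - 1.0 / np.linalg.norm(x)
print(f"relative energy error after ~100 orbits: {abs((e1 - e0) / e0):.2e}")
```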

  14. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    SciTech Connect

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor S.

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce the energy consumed for lighting and HVAC in low-energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days on a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
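
    The matrix multiplication being accelerated is the three-phase product of view, transmission (BSDF), and daylight matrices with a set of sky vectors; the NumPy sketch below shows the structure of that computation with random placeholder matrices and illustrative dimensions, not the paper's OpenCL kernels or real Radiance data.

```python
# Illustration of the matrix chain at the heart of the three-phase method,
# i = V . T . D . s (view, transmission/BSDF, daylight matrices, sky vectors).
# Dimensions and contents are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_klems, n_sky, n_steps = 1000, 145, 146, 8760   # illustrative sizes

V = rng.random((n_sensors, n_klems))      # sensor <- window (view matrix)
T = rng.random((n_klems, n_klems))        # window BSDF (transmission matrix)
D = rng.random((n_klems, n_sky))          # window <- sky (daylight matrix)
S = rng.random((n_sky, n_steps))          # one sky vector per annual timestep

# Associating the product from the right keeps the intermediates small:
# V @ (T @ (D @ S)) is far cheaper than ((V @ T) @ D) @ S when n_sensors is large.
illum = V @ (T @ (D @ S))
print(illum.shape)                        # (n_sensors, n_steps) annual results
```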

  15. Comparing transient, accelerated, and equilibrium simulations of the last 30 000 years with the GENIE-1 model

    NASA Astrophysics Data System (ADS)

    Lunt, D. J.; Williamson, M. S.; Valdes, P. J.; Lenton, T. M.

    2006-06-01

    We examine several aspects of the ocean-atmosphere system over the last 30 000 years, by carrying out simulations with prescribed ice-sheets, atmospheric CO2 concentration, and orbital parameters. We use the GENIE-1 model with a geostrophic ocean, dynamic sea-ice, an energy balance atmosphere, and a land-surface scheme with fixed vegetation. A transient simulation, with boundary conditions derived from ice-core records and ice-sheet reconstructions, is compared with equilibrium snapshot simulations, including the Last Glacial Maximum (21 000 years before present; 21 kyrBP), mid-Holocene (6 kyrBP) and pre-industrial. The equilibrium snapshot surface temperatures are all very similar to their corresponding time period in the transient simulation, suggesting that in the last 30 000 years, the ocean-atmosphere system has been close to equilibrium with its boundary conditions. We investigate the method of accelerating the boundary conditions of a transient simulation and find that the Southern Ocean is the region most affected by the acceleration. The Northern Hemisphere, even with a factor of 10 acceleration, is relatively unaffected.

  16. Application of the Reduction of Scale Range in a Lorentz Boosted Frame to the Numerical Simulation of Particle Acceleration Devices

    SciTech Connect

    Vay, J.-L.; Fawley, W.M.; Geddes, C.G.R.; Cormier-Michel, E.; Grote, D.P.

    2009-05-01

    It has been shown [1] that it may be computationally advantageous to perform computer simulations in a boosted frame for a certain class of systems: particle beams interacting with electron clouds, free electron lasers, and laser-plasma accelerators. However, even if the computer model relies on a covariant set of equations, it was also pointed out that algorithmic difficulties related to discretization errors may have to be overcome in order to take full advantage of the potential speedup [2]. In this paper, we focus on the complications of data input and output in a Lorentz boosted frame simulation, and describe the procedures that were implemented in the simulation code Warp [3]. We present our most recent progress in the modeling of laser wakefield acceleration in a boosted frame, and describe briefly the potential benefits of calculating in a boosted frame for the modeling of coherent synchrotron radiation.

  17. Fatigue-test acceleration with flight-by-flight loading and heating to simulate supersonic-transport operation

    NASA Technical Reports Server (NTRS)

    Imig, L. A.; Garrett, L. E.

    1973-01-01

    Possibilities for reducing fatigue-test time for supersonic-transport materials and structures were studied in tests with simulated flight-by-flight loading. In order to determine whether short-time tests were feasible, the results of accelerated tests (2 sec per flight) were compared with the results of real-time tests (96 min per flight). The effects of design mean stress, the stress range for ground-air-ground cycles, simulated thermal stress, the number of stress cycles in each flight, and salt corrosion were studied. The flight-by-flight stress sequences were applied to notched sheet specimens of Ti-8Al-1Mo-1V and Ti-6Al-4V titanium alloys. A linear cumulative-damage analysis accounted for large changes in stress range of the simulated flights but did not account for the differences between real-time and accelerated tests. The fatigue lives from accelerated tests were generally within a factor of two of the lives from real-time tests; thus, within the scope of the investigation, accelerated testing seems feasible.
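
    The linear cumulative-damage analysis referred to above is conventionally the Palmgren-Miner rule, in which the damage fractions n_i/N_i from each stress range are summed and failure is predicted when the sum reaches one; the sketch below shows that bookkeeping with a hypothetical S-N curve and made-up flight-block cycle counts, not the reported test data.

```python
# Sketch of a linear cumulative-damage (Palmgren-Miner) check. The S-N curve
# constants and the flight-block cycle counts are illustrative placeholders.
def cycles_to_failure(stress_range_mpa, C=2.0e12, m=3.0):
    """Hypothetical S-N curve: N = C * S^-m."""
    return C * stress_range_mpa ** (-m)

flight_block = [            # (stress range [MPa], cycles per flight), made up
    (300.0, 1),             # ground-air-ground cycle
    (120.0, 20),            # gust/manoeuvre cycles
    (60.0, 200),            # minor cycles
]

damage_per_flight = sum(n / cycles_to_failure(s) for s, n in flight_block)
print(f"damage per flight: {damage_per_flight:.2e}, "
      f"predicted life: {1.0 / damage_per_flight:.0f} flights")
```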

  18. PIC Simulations Of Ion Acceleration By Linearly And Circularly Polarized Laser Pulses

    SciTech Connect

    Limpouch, Jiri; Klimo, Ondrej; Psikal, Jan; Tikhonchuk, Vladimir T.; Kawata, Shigeo; Andreev, Alexander A.

    2008-06-24

    Linearly polarized laser radiation accelerates electrons to very high velocities, and these electrons form a sheath layer on the rear side of thin targets where protons are preferentially accelerated. When mass-limited targets are used, the lateral transport of the absorbed laser energy is reduced and the accelerating field is enhanced. For targets consisting of two ion species, heavier ions facilitate the formation of a quasi-monoenergetic bunch of lighter ions. For circularly polarized light, fast electron production is suppressed by the absence of the oscillatory component of the ponderomotive force. Ions are accelerated on the front side by the separation field, and a very thin foil can be accelerated as one massive quasi-neutral block. As all ion species acquire the same velocity, this acceleration mechanism is preferred for heavier ions.

  19. Magnetogasdynamic compression of a coaxial plasma accelerator flow for micrometeoroid simulation

    NASA Technical Reports Server (NTRS)

    Igenbergs, E. B.; Shriver, E. L.

    1974-01-01

    A new configuration of a coaxial plasma accelerator with self-energized magnetic compressor coil attached is described. It is shown that the circuit may be treated theoretically by analyzing an equivalent circuit mesh. The results obtained from the theoretical analysis compare favorably with the results measured experimentally. Using this accelerator configuration, glass beads of 125 micron diameter were accelerated to velocities as high as 11 kilometers per second, while 700 micron diameter glass beads were accelerated to velocities as high as 5 kilometers per second. The velocities are within the hypervelocity regime of meteoroids.

  20. Design and Simulation of IOTA - a Novel Concept of Integrable Optics Test Accelerator

    SciTech Connect

    Nagaitsev, S.; Valishev, A.; Danilov, V.V.; Shatilov, D.N.; /Novosibirsk, IYF

    2012-05-01

    The use of nonlinear lattices with large betatron tune spreads can increase instability and space charge thresholds due to improved Landau damping. Unfortunately, the majority of nonlinear accelerator lattices turn out to be nonintegrable, producing chaotic motion and a complex network of stable and unstable resonances. Recent advances in finding the integrable nonlinear accelerator lattices have led to a proposal to construct at Fermilab a test accelerator with strong nonlinear focusing which avoids resonances and chaotic particle motion. This presentation will outline the main challenges, theoretical design solutions and construction status of the Integrable Optics Test Accelerator (IOTA) underway at Fermilab.

  1. GEANT4 simulations for beam emittance in a linear collider based on plasma wakefield acceleration

    SciTech Connect

    Mete, O.; Xia, G.; Hanahoe, K.; Labiche, M.

    2015-08-15

    Alternative acceleration technologies are currently under development for cost-effective, robust, compact, and efficient solutions. One such technology is plasma wakefield acceleration, driven by either a charged particle or a laser beam. However, potential issues affecting beam quality must be studied in detail. In this paper, the emittance evolution of a witness beam due to elastic scattering from gaseous media and under transverse focusing wakefields is studied.

  2. Comparing transient, accelerated, and equilibrium simulations of the last 30 000 years with the GENIE-1 model

    NASA Astrophysics Data System (ADS)

    Lunt, D. J.; Williamson, M. S.; Valdes, P. J.; Lenton, T. M.; Marsh, R.

    2006-11-01

    We examine several aspects of the ocean-atmosphere system over the last 30 000 years, by carrying out simulations with prescribed ice sheets, atmospheric CO2 concentration, and orbital parameters. We use the GENIE-1 model with a frictional geostrophic ocean, dynamic sea ice, an energy balance atmosphere, and a land-surface scheme with fixed vegetation. A transient simulation, with boundary conditions derived from ice-core records and ice sheet reconstructions, is compared with equilibrium snapshot simulations, including the Last Glacial Maximum (21 000 years before present; 21 kyrBP), mid-Holocene (6 kyrBP) and pre-industrial. The equilibrium snapshot simulations are all very similar to their corresponding time period in the transient simulation, indicating that over the last 30 000 years, the model's ocean-atmosphere system is close to equilibrium with its boundary conditions. However, our simulations neglect the transfer of fresh water from and to the ocean, resulting from the growth and decay of ice sheets, which would, in reality, lead to greater disequilibrium. Additionally, the GENIE-1 model exhibits a rather limited response in terms of its Atlantic Meridional Overturning Circulation (AMOC) over the 30 000 years; a more sensitive AMOC would also be likely to lead to greater disequilibrium. We investigate the method of accelerating the boundary conditions of a transient simulation and find that the Southern Ocean is the region most affected by the acceleration. The Northern Hemisphere, even with a factor of 10 acceleration, is relatively unaffected. The results are robust to changes to several tunable parameters in the model. They also hold when a higher vertical resolution is used in the ocean.

  3. Assessment of the utility of contact-based restraints in accelerating the prediction of protein structure using molecular dynamics simulations.

    PubMed

    Raval, Alpan; Piana, Stefano; Eastwood, Michael P; Shaw, David E

    2016-01-01

    Molecular dynamics (MD) simulation is a well-established tool for the computational study of protein structure and dynamics, but its application to the important problem of protein structure prediction remains challenging, in part because extremely long timescales can be required to reach the native structure. Here, we examine the extent to which the use of low-resolution information in the form of residue-residue contacts, which can often be inferred from bioinformatics or experimental studies, can accelerate the determination of protein structure in simulation. We incorporated sets of 62, 31, or 15 contact-based restraints in MD simulations of ubiquitin, a benchmark system known to fold to the native state on the millisecond timescale in unrestrained simulations. One-third of the restrained simulations folded to the native state within a few tens of microseconds, a speedup of over an order of magnitude compared with unrestrained simulations and a demonstration of the potential for limited amounts of structural information to accelerate structure determination. Almost all of the remaining ubiquitin simulations reached near-native conformations within a few tens of microseconds, but remained trapped there, apparently due to the restraints. We discuss potential methodological improvements that would facilitate escape from these near-native traps and allow more simulations to quickly reach the native state. Finally, using a target from the Critical Assessment of protein Structure Prediction (CASP) experiment, we show that distance restraints can improve simulation accuracy: In our simulations, restraints stabilized the native state of the protein, enabling a reasonable structural model to be inferred. PMID:26266489
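
    One common way to impose such residue-residue contacts is a flat-bottom harmonic distance restraint that is inactive until the pair separates beyond a cutoff; the sketch below shows that functional form with illustrative parameters. It is an assumption used for illustration, since the abstract does not specify the exact restraint potential employed.

```python
# Sketch of a flat-bottom harmonic distance restraint of the kind often used
# to encode residue-residue contacts in MD. Functional form and parameters are
# illustrative assumptions, not the authors' exact restraint.
import numpy as np

def restraint_energy_force(r, r0=8.0, k=2.0):
    """Energy and radial force for one distance restraint (r, r0 in Angstrom)."""
    excess = max(r - r0, 0.0)
    energy = 0.5 * k * excess ** 2
    force = -k * excess                 # pulls the pair back together when r > r0
    return energy, force

for r in (6.0, 8.0, 10.0, 14.0):
    e, f = restraint_energy_force(r)
    print(f"r = {r:4.1f}  ->  E = {e:5.1f}, F = {f:5.1f}")
```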

  4. Exploring inhibitor release pathways in histone deacetylases using random acceleration molecular dynamics simulations.

    PubMed

    Kalyaanamoorthy, Subha; Chen, Yi-Ping Phoebe

    2012-02-27

    Exploration of molecular channels remains a prominent approach for elucidating the structure and accessibility of the active site and other internal spaces of macromolecules. The volume and silhouette characterization of these channels provides answers to questions of substrate access and ligand exchange between the buried active site and the exterior of the protein. Histone deacetylases (HDACs) are metal-dependent enzymes involved in cell growth, cell cycle regulation, and progression, and their deregulation has been linked with different types of cancer. Hence HDACs, especially the class I family, are widely recognized as important cancer targets, and the characterization of their structures and functions has been of special interest in cancer drug discovery. The class I HDACs are known to possess two different protein channels, an 11 Å and a 14 Å channel (named channels A and B1, respectively), of which the former is a ligand- or substrate-occupying tunnel that leads to the buried active-site zinc ion and the latter is speculated to be involved in product release. In this work, we have carried out random acceleration molecular dynamics (RAMD) simulations coupled with classical molecular dynamics to explore the release of the ligand, N-(2-aminophenyl) benzamide (LLX), from the active sites of the recently solved X-ray crystal structure of HDAC2 and the computationally modeled HDAC1 protein. The RAMD simulations identified significant structural and dynamic features of the HDAC channels, especially the key 'gate-keeping' amino acid residues that control these channels and the ligand release events. Further, this study identified a novel and unique channel B2, a subchannel of channel B1, in the HDAC1 protein structure. The roles of water molecules in the LLX release from the HDAC1 and HDAC2 enzymes are also discussed. Such structural and dynamic properties of the HDAC protein channels that govern ligand escape reactions will provide

  5. Muscle contributions to centre of mass acceleration during turning gait in typically developing children: A simulation study.

    PubMed

    Dixon, Philippe C; Jansen, Karen; Jonkers, Ilse; Stebbins, Julie; Theologis, Tim; Zavatsky, Amy B

    2015-12-16

    Turning while walking requires substantial joint kinematic and kinetic adaptations compared to straight walking in order to redirect the body centre of mass (COM) towards the new walking direction. The role of muscles and external forces in controlling and redirecting the COM during turning remains unclear. The aim of this study was to compare the contributors to COM medio-lateral acceleration during 90° pre-planned turns about the inside limb (spin) and straight walking in typically developing children. Simulations of straight walking and turning gait based on experimental motion data were implemented in OpenSim. The contributors to COM global medio-lateral acceleration during the approach (outside limb) and turn (inside limb) stance phase were quantified via an induced acceleration analysis. Changes in medio-lateral COM acceleration occurred during both turning phases, compared to straight walking (p<0.001). During the approach, outside limb plantarflexors (soleus and medial gastrocnemius) contribution to lateral (away from the turn side) COM acceleration was reduced (p<0.001), whereas during the turn, inside limb plantarflexors (soleus and gastrocnemii) contribution to lateral acceleration (towards the turn side) increased (p≤0.013) and abductor (gluteus medius and minimus) contribution medially decreased (p<0.001), compared to straight walking, together helping accelerate the COM towards the new walking direction. Knowledge of the changes in muscle contributions required to modulate the COM position during turning improves our understanding of the control mechanisms of gait and may be used clinically to guide the management of gait disorders in populations with restricted gait ability. PMID:26555714

  6. The role of the electron convection term for the parallel electric field and electron acceleration in MHD simulations

    SciTech Connect

    Matsuda, K.; Terada, N.; Katoh, Y.; Misawa, H.

    2011-08-15

    There has been considerable concern about the origin of the parallel electric field within the framework of fluid equations in the auroral acceleration region. This paper proposes a new method to simulate magnetohydrodynamic (MHD) equations that include the electron convection term and shows its efficiency with simulation results in one dimension. We apply a third-order semi-discrete central scheme to investigate the characteristics of the electron convection term, including its nonlinearity. At a steady-state discontinuity, the sum of the ion and electron convection terms balances the ion pressure gradient. We find that the electron convection term acts like the gradient of a negative pressure and reduces the ion sound speed or amplifies the sound mode when parallel current flows. The electron convection term enables us to describe a situation in which a parallel electric field and parallel electron acceleration coexist, which is impossible in ideal or resistive MHD.

  7. ICME Shock Accelerated Seps Transport in 3-D Heliospheric Magnetic Fields: Simulations and Multi Spacecraft Observations in Solar Cycle 24

    NASA Astrophysics Data System (ADS)

    Qin, G.; Wang, Y.

    2014-12-01

    In solar cycle 24, solar energetic particles (SEPs) have been measured by multiple spacecraft at different locations, e.g., STEREO A and B, and ACE. Interplanetary observers located on opposite sides of the Sun may or may not be connected to the ICME shock by the IMF, and the connected observers may be linked to different parts of an ICME. In this work, we numerically solve the Fokker-Planck transport equation to obtain the fluxes of SEPs accelerated by ICME shocks. In addition, we compare our simulation results with SEP observations from different spacecraft, e.g., STEREO A and B, and ACE, during solar cycle 24. The simulation and data analysis together can improve our understanding of SEP acceleration and transport mechanisms.

  8. Monte Carlo simulation for Neptun 10 PC medical linear accelerator and calculations of output factor for electron beam

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad

    2012-01-01

    Aim Exact knowledge of dosimetric parameters is an essential pre-requisite of effective treatment in radiotherapy. In order to fulfill this requirement, different techniques have been used, one of which is Monte Carlo simulation. Materials and methods This study used the MCNP-4C code to simulate electron beams from the Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results The measurements were carried out with a Wellhofler-Scanditronix dose scanning system. Our findings revealed that output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurement are in very good agreement. Conclusion In general, the very good consistency of simulated and measured results is a good proof that the goal of this work has been accomplished. PMID:24377010

  9. Bacterial cells enhance laser driven ion acceleration

    PubMed Central

    Dalui, Malay; Kundu, M.; Trivikram, T. Madhu; Rajeev, R.; Ray, Krishanu; Krishnamurthy, M.

    2014-01-01

    Intense laser-produced plasmas generate hot electrons, which in turn leads to ion acceleration. The ability to generate faster ions or hotter electrons using the same laser parameters is one of the main outstanding paradigms in intense laser-plasma physics. Here, we present a simple, albeit unconventional, target that succeeds in generating 700 keV carbon ions where conventional targets for the same laser parameters generate at most 40 keV. A few layers of micron-sized bacteria coating a polished surface increase the laser energy coupling and generate a hotter plasma, which is more effective for ion acceleration compared to conventional polished targets. Particle-in-cell simulations show that micro-particle-coated targets are much more effective in ion acceleration, as seen in the experiment. We envisage that the accelerated, high-energy carbon ions can be used as a source for multiple applications. PMID:25102948

  10. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    NASA Astrophysics Data System (ADS)

    Jie, Liang; Li, KenLi; Shi, Lin; Liu, RangSu; Mei, Jing

    2014-01-01

    Molecular dynamics simulation is a powerful tool for simulating and analyzing complex physical processes and phenomena at the atomic scale, predicting the natural time evolution of a system of atoms. Precise simulation of physical processes places strong requirements on both the simulation size and the computing timescale, so finding available computing resources is crucial to accelerating the computation. The general-purpose GPU (GPGPU), a tremendous computational resource, has recently been utilized for general-purpose computing owing to its high floating-point arithmetic performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method that accelerates the neighbor-list update and the interaction-force calculation by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to systems involving 10 000 000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions with different CUDA precision settings, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than 16-core CPU cluster implementations. On the basis of the simulated results, comparisons between the theoretical and experimental results are carried out, showing good agreement between the two, and more complete and larger cluster structures, as found in actual macroscopic materials, are observed. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed in the process of metal solidification are observed in the large-sized system.
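
    The neighbor-list update that dominates the cost is usually organized around a cell list, so that only atoms in adjacent cells are tested against the cutoff; the plain-Python sketch below shows that algorithm on random coordinates. It illustrates the idea only and is unrelated to the paper's CUDA implementation; box size, cutoff, and atom count are arbitrary.

```python
# Cell-list neighbour search sketch: bin atoms into cells no smaller than the
# cutoff, then test only the 27 surrounding cells instead of all N^2 pairs.
# Assumes at least three cells per dimension so periodic images do not alias.
import numpy as np
from collections import defaultdict
from itertools import product

def build_neighbor_list(pos, box, cutoff):
    ncell = max(int(box // cutoff), 1)
    cell_size = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // cell_size).astype(int) % ncell)].append(i)

    pairs = []
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            key = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
            for i in members:
                for j in cells.get(key, ()):
                    if j <= i:
                        continue
                    d = pos[i] - pos[j]
                    d -= box * np.round(d / box)        # minimum-image convention
                    if np.dot(d, d) < cutoff ** 2:
                        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(2)
box, cutoff = 20.0, 2.5
pos = rng.random((500, 3)) * box
print(f"{len(build_neighbor_list(pos, box, cutoff))} pairs within cutoff")
```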

  11. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    SciTech Connect

    Jie, Liang; Li, KenLi; Shi, Lin; Liu, RangSu; Mei, Jing

    2014-01-15

    Molecular dynamics simulation is a powerful tool for simulating and analyzing complex physical processes and phenomena at the atomic scale, predicting the natural time evolution of a system of atoms. Precise simulation of physical processes places strong requirements on both the simulation size and the computing timescale, so finding available computing resources is crucial to accelerating the computation. The general-purpose GPU (GPGPU), a tremendous computational resource, has recently been utilized for general-purpose computing owing to its high floating-point arithmetic performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method that accelerates the neighbor-list update and the interaction-force calculation by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to systems involving 10 000 000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions with different CUDA precision settings, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than 16-core CPU cluster implementations. On the basis of the simulated results, comparisons between the theoretical and experimental results are carried out, showing good agreement between the two, and more complete and larger cluster structures, as found in actual macroscopic materials, are observed. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed in the process of metal solidification are observed with large

  12. Estimation of neutron production from accelerator head assembly of 15 MV medical LINAC using FLUKA simulations

    NASA Astrophysics Data System (ADS)

    Patil, B. J.; Chavan, S. T.; Pethe, S. N.; Krishnan, R.; Bhoraskar, V. N.; Dhole, S. D.

    2011-12-01

    For the production of a clinical 15 MeV photon beam, the design of the accelerator head assembly has been optimized using the Monte Carlo based FLUKA code. The accelerator head assembly consists of an e-γ target, a flattening filter, a primary collimator and an adjustable rectangular secondary collimator. The accelerators used for radiation therapy generate a continuous-energy gamma-ray spectrum, bremsstrahlung (BR), by impinging high-energy electrons on high-Z materials. Electron accelerators operating above 10 MeV can also produce neutrons, mainly through photonuclear (γ, n) reactions induced by high-energy photons in the accelerator head materials. These neutrons contaminate the therapeutic beam and give a non-negligible contribution to the patient dose. The gamma dose and neutron dose equivalent at the patient plane (SSD = 100 cm) were obtained for field sizes of 0 × 0, 10 × 10, 20 × 20, 30 × 30 and 40 × 40 cm2. The maximum neutron dose equivalent is observed near the central axis for the 30 × 30 cm2 field size; it is 0.71% of the central-axis photon dose rate of 0.34 Gy/min at 1 μA electron beam current.

  13. Simulations of laser-wakefield acceleration with external electron-bunch injection for REGAE experiments at DESY

    SciTech Connect

    Grebenyuk, Julia; Mehrling, Timon; Tsung, Frank S.; Floettman, Klaus; Osterhoff, Jens

    2012-12-21

    We present particle-in-cell simulations for future laser-plasma wakefield experiments with external bunch injection at the REGAE accelerator facility at DESY, Hamburg, Germany. Two effects have been studied in detail: emittance evolution of electron bunches externally injected into a wake, and longitudinal bunch compression inside the wakefield. Results show significant transverse emittance growth during the injection process, if the electron bunch is not matched to its intrinsic betatron motion inside the wakefield. This might introduce the necessity to include beam-matching sections upstream of each plasma-accelerator section with fundamental implications on the design of staged laser wakefield accelerators. When externally injected at the zero-field crossing of the laser-driven wake, the electron bunch may undergo significant compression in longitudinal direction and be accelerated simultaneously due to the gradient in the acting force. The mechanism would allow for production of single high-energy, ultra-short (on the order of one femtosecond) bunches at REGAE. The optimal conditions for maximal bunch compression are discussed in the presented studies.

  14. Simulations of an acceleration scheme for producing high intensity and low emittance antiproton beam for Fermilab collider operation

    SciTech Connect

    Wu, Vincent; Bhat, C.M.; MacLachlan, J.A.; /Fermilab

    2005-05-01

    During Fermilab collider operation, the Main Injector (MI) provides high-intensity and low-emittance proton and antiproton beams for the Tevatron. The present coalescing scheme for antiprotons in the Main Injector yields about a factor of two increase in the longitudinal emittance and a 5% to 20% decrease in intensity before injection into the Tevatron. In order to maximize the integrated luminosity delivered to the collider experiments, it is important to minimize the emittance growth and maximize the intensity of the MI beam. To this end, a new scheme using a combination of 2.5 MHz and 53 MHz acceleration has been developed and tested. This paper describes the full simulation of the new acceleration scheme, taking into account space charge, 2.5 MHz and 53 MHz beam loading, and the effect of residual 53 MHz rf voltage during 2.5 MHz acceleration and rf manipulations. The simulations show longitudinal emittance growth at the 10% level with no beam loss. The experimental test of the new scheme is reported in another PAC05 paper.

  15. PIC simulations of reforming perpendicular shocks- implications for ion acceleration at SNRs and the heliospheric termination shock

    NASA Astrophysics Data System (ADS)

    Lee, R. E.; Chapman, S. C.; Dendy, R. O.

    2004-12-01

    Recent particle-in-cell (PIC) simulations have revealed time-dependent shock solutions for parameters relevant to astrophysical and heliospheric shocks [e.g. 1,2,3]. These solutions are characterised by a shock which cyclically reforms on the spatio-temporal scales of the incoming protons. Whether a shock solution is stationary or reforming depends not only upon the model adopted for electron dynamics, but also on the plasma parameters, notably the upstream beta. For the heliospheric termination shock, these parameters are not well determined: some estimates suggest that the termination shock may be in a parameter regime such that it is time-dependent. It has been pointed out [3] that this may terminate some acceleration processes, for example shock surfing, which have been proposed for time-stationary shock solutions. The introduction of time-dependent electromagnetic fields intrinsic to the shock does, however, introduce the possibility of new mechanisms for the acceleration of protons. We will discuss the prospects for local ion acceleration at reforming quasi-perpendicular shocks, in the presence of pickup ions, as seen in self-consistent PIC simulations with parameters relevant to both SNRs and the heliospheric termination shock. [1] Shimada, N., and M. Hoshino, Astrophys. J., 543, L67, 2000. [2] Schmitz, H., S.C. Chapman and R.O. Dendy, Astrophys. J., 570, 637, 2002. [3] Scholer, M., I. Shinohara and S. Matsukiyo, J. Geophys. Res., 108, 1014, 2003.

  16. INJECTOR PARTICLE SIMULATION AND BEAM TRANSPORT IN A COMPACT LINEAR PROTON ACCELERATOR

    SciTech Connect

    Blackfield, D T; Chen, Y J; Harris, J; Nelson, S; Paul, A; Poole, B

    2007-06-18

    A compact Dielectric Wall Accelerator (DWA), with accelerating gradients of up to 100 MV/m, is being developed to accelerate proton bunches for use in cancer therapy treatment. The injector must create a proton pulse of up to several hundred picoseconds, which is then shaped and accelerated to energies up to 250 MeV. The Particle-In-Cell (PIC) code LSP is used to model several aspects of this design. First, we use LSP to obtain the voltage waveform in the A-K gap that will produce a proton bunch with the requisite charge. We then model pulse compression and shaping in the section between the A-K gap and the DWA. Finally, we use LSP to model the beam transport through the DWA.

  17. Paradigms Past and Future.

    ERIC Educational Resources Information Center

    Oates, Maureen

    1980-01-01

    Evaluates past paradigms (conceptual frameworks) such as the belief in the unlimited resources of the earth for humanity's particular benefit and the paradigm of the infallibility of technology. Illustrates how we are generally moving toward the new paradigm that small and simple is not only beautiful, but also more efficient, reliable, practical,…

  18. The Generative Paradigm.

    ERIC Educational Resources Information Center

    Loynes, Chris

    2002-01-01

    The "algorithmic" model of outdoor experiential learning is based in military tradition and characterized by questionable scientific rationale, production line metaphor, and the notion of learning as marketable commodity. Alternatives are the moral paradigm; the ecological paradigm "friluftsliv"; and the emerging "generative" paradigm, which…

  19. Studies of Multipactor in Dielectric-Loaded Accelerator Structures: Comparison of Simulation Results with Experimental Data

    SciTech Connect

    Sinitsyn, Oleksandr; Nusinovich, Gregory; Antonsen, Thomas Jr.

    2010-11-04

    In this paper new results of numerical studies of multipactor in dielectric-loaded accelerator structures are presented. The results are compared with experimental data obtained during recent studies of such structures performed by Argonne National Laboratory, the Naval Research Laboratory, SLAC National Accelerator Laboratory and Euclid TechLabs, LLC. Good agreement between theory and experiment was observed for the structures with larger inner diameter; however, the structures with smaller inner diameter showed a discrepancy between the two. Possible reasons for this discrepancy are discussed.

  20. Measurement of depth-dose of linear accelerator and simulation by use of Geant4 computer code

    PubMed Central

    Sardari, D.; Maleki, R.; Samavat, H.; Esmaeeli, A.

    2010-01-01

    Radiation therapy is an established method of cancer treatment. New technologies in cancer radiotherapy require more accurate computation of the dose delivered in the radiotherapy treatment plan. This study presents results of a Geant4-based application for simulating the absorbed dose distribution delivered by a medical linear accelerator (LINAC). The LINAC geometry is accurately described in the Monte Carlo code using the accelerator manufacturer's specifications. The capability of the software to evaluate the dose distribution has been verified by comparison with measurements in a water phantom; the comparisons were performed for percentage depth dose (PDD) and profiles for various field sizes and depths, for a 6 MV beam. Experimental and calculated dose values were in good agreement both in PDD and in transverse sections of the water phantom. PMID:24376926
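
    As a small illustration of the PDD quantity compared above, the sketch below normalises a simulated (or measured) central-axis depth-dose curve to its maximum; the depth and dose values are made-up placeholders, not data from the paper.

      import numpy as np

      def percentage_depth_dose(depth_cm, dose):
          """Convert a central-axis depth-dose curve into a percentage depth dose (PDD),
          normalised to the dose maximum, and return the depth of that maximum."""
          dose = np.asarray(dose, dtype=float)
          pdd = 100.0 * dose / dose.max()
          return pdd, depth_cm[int(np.argmax(dose))]

      depth = np.array([0.5, 1.0, 1.5, 2.0, 5.0, 10.0, 20.0])          # cm (placeholder grid)
      dose  = np.array([0.85, 0.98, 1.00, 0.99, 0.87, 0.67, 0.38])     # arbitrary units
      pdd, d_max = percentage_depth_dose(depth, dose)
      print(f"d_max = {d_max} cm, PDD(10 cm) = {pdd[5]:.1f}%")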

  1. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction–diffusion systems

    SciTech Connect

    Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.

    2014-10-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.
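
    The essence of the method (stepping directly from one reaction event to the next with a propensity that varies in time because diffusion is treated analytically in between) can be illustrated by inversion sampling of the next firing time. The linear time dependence below is an assumption chosen for simplicity, not the propensity form derived by the authors.

      import numpy as np

      rng = np.random.default_rng(0)

      def next_firing_time(a0, slope):
          """Sample the next firing time for a time-dependent propensity a(t) = a0 + slope*t
          (illustrative form) by solving  int_0^tau a(t) dt = -ln(u)  for tau."""
          target = -np.log(rng.random())
          if abs(slope) < 1e-12:
              return target / a0
          # a0*tau + 0.5*slope*tau^2 = target  ->  positive root of the quadratic
          return (-a0 + np.sqrt(a0 * a0 + 2.0 * slope * target)) / slope

      taus = [next_firing_time(a0=2.0, slope=0.5) for _ in range(5)]
      print(["%.3f" % tau for tau in taus])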

  2. The Time Dependent Propensity Function for Acceleration of Spatial Stochastic Simulation of Reaction-Diffusion Systems

    PubMed Central

    Wu, Sheng; Li, Hong; Petzold, Linda R.

    2015-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185

  3. Application of the reduction of scale range in a Lorentz boosted frame to the numerical simulation of particle acceleration devices.

    SciTech Connect

    Vay, J; Fawley, W M; Geddes, C G; Cormier-Michel, E; Grote, D P

    2009-05-05

    It has been shown that the ratio of the longest to shortest space and time scales of a system of two or more components crossing at relativistic velocities is not invariant under a Lorentz transformation. This implies the existence of a frame of reference minimizing an aggregate measure of the ratio of space and time scales. It was demonstrated that this translates into a reduction by orders of magnitude in computer simulation run times, using methods based on first principles (e.g., Particle-In-Cell), for particle acceleration devices and for problems such as free electron lasers, laser-plasma accelerators, and particle beams interacting with electron clouds. Since then, speed-ups ranging from 75 to more than four orders of magnitude have been reported for the simulation of either scaled or reduced models of the above-cited problems. It was also shown that, to achieve the full benefits of calculating in a boosted frame, some of the standard numerical techniques needed to be revised. The theory behind the speed-up of numerical simulation in a boosted frame, the latest developments of numerical methods, and example applications with the new opportunities that they offer are all presented.
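
    A minimal sketch of the nominal scaling, under the simple assumption (quoted for related boosted-frame work later in this list) that the speedup grows roughly as the square of the boost Lorentz factor; the actual gains reported above depend on the problem and on the numerical scheme.

      import numpy as np

      def boosted_frame_speedup(gamma_b):
          """Nominal speedup estimate ~ gamma_b^2 for a boosted-frame simulation.
          This is only the rough scaling argument; real gains depend on the problem
          and on the numerical methods used."""
          beta_b = np.sqrt(1.0 - 1.0 / gamma_b**2)
          return beta_b, gamma_b**2

      for g in (5, 20, 100):
          beta_b, speedup = boosted_frame_speedup(g)
          print(f"gamma_b = {g:3d}: beta_b = {beta_b:.5f}, nominal speedup ~ {speedup:.0f}x")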

  4. Simulation of power flow in magnetically insulated convolutes for pulsed modular accelerators

    SciTech Connect

    Seidel, D. B.; Goplen, B. C.; VanDevender, J. P.

    1980-01-01

    Two distinct simulation approaches for magnetic insulation are developed which can be used to address the question of nonsimultaneity. First, a two-dimensional model for a two-module system is simulated using a fully electromagnetic, two-dimensional, time-dependent particle code. Next, a nonlinear equivalent circuit approach is used to compare with the direct simulation for the two module case. The latter approach is then extended to a more interesting three-dimensional geometry with several MITL modules.

  5. Seat cushion to provide realistic acceleration cues to aircraft simulator pilot

    NASA Technical Reports Server (NTRS)

    Ashworth, B. R. (Inventor)

    1979-01-01

    Seat cushions, each including an air cell with a non-compressible surface, are disclosed. Apparatus is provided for initially controlling the air pressure in the air cells so that the two main support areas of the simulator pilot touch the non-compressible surface and thus begin to compress the flesh near these areas. During a simulated flight the apparatus controls the air pressure in the cells to simulate the events that occur in a seat cushion during actual flight.

  6. Magneto-hydrodynamics simulation study of deflagration mode in co-axial plasma accelerators

    SciTech Connect

    Sitaraman, Hariswaran; Raja, Laxminarayan L.

    2014-01-15

    Experimental studies by Poehlmann et al. [Phys. Plasmas 17(12), 123508 (2010)] on a coaxial electrode magnetohydrodynamic (MHD) plasma accelerator have revealed two modes of operation. A deflagration or stationary mode is observed for lower power settings, while higher input power leads to a detonation or snowplow mode. A numerical modeling study of a coaxial plasma accelerator using the non-ideal MHD equations is presented. The effect of plasma conductivity on the axial distribution of radial current is studied and found to agree well with experiments. Lower conductivities lead to the formation of a high current density, stationary region close to the inlet/breech, which is a characteristic of the deflagration mode, while a propagating current sheet like feature is observed at higher conductivities, similar to the detonation mode. Results confirm that plasma resistivity, which determines magnetic field diffusion effects, is fundamentally responsible for the two modes.

  7. Magneto-hydrodynamics simulation study of deflagration mode in co-axial plasma accelerators

    NASA Astrophysics Data System (ADS)

    Sitaraman, Hariswaran; Raja, Laxminarayan L.

    2014-01-01

    Experimental studies by Poehlmann et al. [Phys. Plasmas 17(12), 123508 (2010)] on a coaxial electrode magnetohydrodynamic (MHD) plasma accelerator have revealed two modes of operation. A deflagration or stationary mode is observed for lower power settings, while higher input power leads to a detonation or snowplow mode. A numerical modeling study of a coaxial plasma accelerator using the non-ideal MHD equations is presented. The effect of plasma conductivity on the axial distribution of radial current is studied and found to agree well with experiments. Lower conductivities lead to the formation of a high current density, stationary region close to the inlet/breech, which is a characteristic of the deflagration mode, while a propagating current sheet like feature is observed at higher conductivities, similar to the detonation mode. Results confirm that plasma resistivity, which determines magnetic field diffusion effects, is fundamentally responsible for the two modes.

  8. Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods

    SciTech Connect

    Luskin, Mitchell; James, Richard; Tadmor, Ellad

    2014-03-30

    This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.

  9. Mechanisms and Simulation of accelerated shrinkage of continental glaciers: a case study of Urumqi Glacier No. 1 Eastern Tianshan, Central Asia

    NASA Astrophysics Data System (ADS)

    Li, Zhongqin; Ren, Jiawen; Li, Huilin; Wang, Puyu; Wang, Feiteng

    2016-04-01

    Similar to most mountain glaciers in the world, Urumqi Glacier No. 1 (UG1), the best-observed glacier in China, with continuous glaciological and climatological monitoring records spanning more than 50 years, has experienced an accelerated recession during the past several decades. The purpose of this study is to investigate this acceleration of the recession. Taking UG1 as an example, we analyze the generic mechanisms behind the accelerated shrinkage of continental mountain glaciers. The results indicate that the acceleration of mass loss of UG1 commenced in 1985, accelerated again in 1996, and that the latter phase was more vigorous. The rise in air temperature during the melting season, the increase of the glacier's ice temperature, and the reduction of albedo on the glacier surface are considered responsible for the accelerated recession. In addition, simulations of the accelerated shrinkage of UG1 are introduced.

  10. Modeling of 10 GeV-1 TeV laser-plasma accelerators using Lorentz boosted simulations

    SciTech Connect

    Vay, J.-L.; Geddes, C. G. R.; Esarey, E.; Schroeder, C. B.; Leemans, W. P.; Cormier-Michel, E.; Grote, D. P.

    2011-12-15

    Modeling of laser-plasma wakefield accelerators in an optimal frame of reference [J.-L. Vay, Phys. Rev. Lett. 98, 130405 (2007)] allows direct and efficient full-scale modeling of deeply depleted and beam loaded laser-plasma stages of 10 GeV-1 TeV (parameters not computationally accessible otherwise). This verifies the scaling of plasma accelerators to very high energies and accurately models the laser evolution and the accelerated electron beam transverse dynamics and energy spread. Over 4, 5, and 6 orders of magnitude speedup is achieved for the modeling of 10 GeV, 100 GeV, and 1 TeV class stages, respectively. Agreement at the percentage level is demonstrated between simulations using different frames of reference for a 0.1 GeV class stage. Obtaining these speedups and levels of accuracy was permitted by solutions for handling data input (in particular, particle and laser beams injection) and output in a relativistically boosted frame of reference, as well as mitigation of a high-frequency instability that otherwise limits effectiveness.

  11. Topical pimecrolimus and tacrolimus do not accelerate photocarcinogenesis in hairless mice after UVA or simulated solar radiation.

    PubMed

    Lerche, Catharina M; Philipsen, Peter A; Poulsen, Thomas; Wulf, Hans Christian

    2009-03-01

    Pimecrolimus and tacrolimus are topical calcineurin inhibitors developed specifically for the treatment of atopic eczema. Experience with long-term use of topical calcineurin inhibitors is limited, and the risk of rare but serious adverse events remains a concern. We have previously demonstrated the absence of a carcinogenic effect of tacrolimus alone and in combination with simulated solar radiation (SSR) on hairless mice. The aim of this study is to determine whether pimecrolimus accelerates photocarcinogenesis in combination with SSR, or whether pimecrolimus and tacrolimus accelerate photocarcinogenesis in combination with UVA. We used 11 groups of 25 hairless female C3.Cg/TifBomTac immunocompetent mice (n = 275). Pimecrolimus cream or tacrolimus ointment was applied to their dorsal skin three times weekly, followed 3-4 h later by SSR (2, 4, or 6 standard erythema doses, SED) or UVA (25 J/cm²). This was continued for up to 365 days in the SSR-treated groups and up to 500 days in the UVA-treated groups. Pimecrolimus did not accelerate the time to development of the first, second or third tumor in any of the groups. Median time to the first tumor was 240 days for the control-2SED group compared with 233 days for the pimecrolimus-2SED group, 156 days for the control-4SED group compared with 163 days for the pimecrolimus-4SED group, and 162 days for the control-6SED group compared with 170 days for the pimecrolimus-6SED group. Only one mouse in each of the three UVA groups developed a tumor. We conclude that pimecrolimus in combination with SSR, and both pimecrolimus and tacrolimus in combination with UVA, do not accelerate photocarcinogenesis in hairless mice. PMID:19183401

  12. SIMULATIONS OF PARTICLE ACCELERATION BEYOND THE CLASSICAL SYNCHROTRON BURNOFF LIMIT IN MAGNETIC RECONNECTION: AN EXPLANATION OF THE CRAB FLARES

    SciTech Connect

    Cerutti, B.; Werner, G. R.; Uzdensky, D. A.; Begelman, M. C. E-mail: greg.werner@colorado.edu E-mail: mitch@jila.colorado.edu

    2013-06-20

    It is generally accepted that astrophysical sources cannot emit synchrotron radiation above 160 MeV in their rest frame. This limit is given by the balance between the accelerating electric force and the radiation reaction force acting on the electrons. The discovery of synchrotron gamma-ray flares in the Crab Nebula, well above this limit, challenges this classical picture of particle acceleration. To overcome this limit, particles must accelerate in a region of high electric field and low magnetic field. This is possible only with a non-ideal magnetohydrodynamic process, like magnetic reconnection. We present the first numerical evidence of particle acceleration beyond the synchrotron burnoff limit, using a set of two-dimensional particle-in-cell simulations of ultra-relativistic pair plasma reconnection. We use a new code, Zeltron, that includes self-consistently the radiation reaction force in the equation of motion of the particles. We demonstrate that the most energetic particles move back and forth across the reconnection layer, following relativistic Speiser orbits. These particles then radiate >160 MeV synchrotron radiation rapidly, within a fraction of a full gyration, after they exit the layer. Our analysis shows that the high-energy synchrotron flux is highly variable in time because of the strong anisotropy and inhomogeneity of the energetic particles. We discover a robust positive correlation between the flux and the cut-off energy of the emitted radiation, mimicking the effect of relativistic Doppler amplification. A strong guide field quenches the emission of >160 MeV synchrotron radiation. Our results are consistent with the observed properties of the Crab flares, supporting the reconnection scenario.
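
    For reference, the origin of the ~160 MeV ceiling quoted above can be sketched with a standard order-of-magnitude argument (not reproduced from the paper itself). Balancing the accelerating electric force against the synchrotron radiation-reaction drag, and inserting the resulting Lorentz factor into the characteristic synchrotron photon energy, gives in Gaussian units

      eE \;=\; \frac{2 e^{4} \gamma^{2} B^{2}}{3 m_e^{2} c^{4}}
      \quad\Longrightarrow\quad
      \gamma_{\max}^{2} \;=\; \frac{3 m_e^{2} c^{4}}{2 e^{3}}\,\frac{E}{B^{2}},
      \qquad
      \epsilon_{\max} \;=\; \frac{3}{2}\,\hbar\,\gamma_{\max}^{2}\,\frac{eB}{m_e c}
      \;=\; \frac{9}{4}\,\frac{m_e c^{2}}{\alpha_{\mathrm{f}}}\,\frac{E}{B}
      \;\approx\; 160\,\frac{E}{B}\ \mathrm{MeV},

    so that under ideal MHD (E ≤ B) the emission cannot exceed roughly 160 MeV, whereas E > B inside a reconnection layer lifts the limit; this is the loophole exploited in the study above.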

  13. Simulations of Particle Acceleration beyond the Classical Synchrotron Burnoff Limit in Magnetic Reconnection: An Explanation of the Crab Flares

    NASA Astrophysics Data System (ADS)

    Cerutti, B.; Werner, G. R.; Uzdensky, D. A.; Begelman, M. C.

    2013-06-01

    It is generally accepted that astrophysical sources cannot emit synchrotron radiation above 160 MeV in their rest frame. This limit is given by the balance between the accelerating electric force and the radiation reaction force acting on the electrons. The discovery of synchrotron gamma-ray flares in the Crab Nebula, well above this limit, challenges this classical picture of particle acceleration. To overcome this limit, particles must accelerate in a region of high electric field and low magnetic field. This is possible only with a non-ideal magnetohydrodynamic process, like magnetic reconnection. We present the first numerical evidence of particle acceleration beyond the synchrotron burnoff limit, using a set of two-dimensional particle-in-cell simulations of ultra-relativistic pair plasma reconnection. We use a new code, Zeltron, that includes self-consistently the radiation reaction force in the equation of motion of the particles. We demonstrate that the most energetic particles move back and forth across the reconnection layer, following relativistic Speiser orbits. These particles then radiate >160 MeV synchrotron radiation rapidly, within a fraction of a full gyration, after they exit the layer. Our analysis shows that the high-energy synchrotron flux is highly variable in time because of the strong anisotropy and inhomogeneity of the energetic particles. We discover a robust positive correlation between the flux and the cut-off energy of the emitted radiation, mimicking the effect of relativistic Doppler amplification. A strong guide field quenches the emission of >160 MeV synchrotron radiation. Our results are consistent with the observed properties of the Crab flares, supporting the reconnection scenario.

  14. Monte Carlo simulation of electron beams from an accelerator head using PENELOPE

    NASA Astrophysics Data System (ADS)

    Sempau, J.; Sánchez-Reyes, A.; Salvat, F.; Oulad ben Tahar, H.; Jiang, S. B.; Fernández-Varea, J. M.

    2001-04-01

    The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the `latent' variance in the phase-space file, are discussed in detail.

  15. Accelerating Molecular Dynamics Simulations to Investigate Shock Response at the Mesoscales

    NASA Astrophysics Data System (ADS)

    Dongare, Avinash; Agarwal, Garvit; Valisetty, Ramakrishna; Namburu, Raju; Rajendran, Arunachalam

    The capability of large-scale molecular dynamics (MD) simulations to model the dynamic response of materials is limited to system sizes at the nanoscale and timescales of nanoseconds. A new method called quasi-coarse-grained dynamics (QCGD) has been developed to expand the capabilities of MD simulations to the mesoscales. The QCGD method is based on solving the equations of motion for a chosen set of representative atoms from an atomistic microstructure while retaining the energetics of these atoms as predicted by MD simulations. The QCGD method allows the modeling of larger systems with larger time steps and is thus able to extend the capabilities of MD simulations to model materials behavior at the mesoscales. The success of the QCGD method is demonstrated by reproducing the shock propagation and failure behavior of single-crystal and nanocrystalline Al microstructures as predicted by MD simulations, and by modeling the shock response and failure behavior of Al microstructures at micron length scales. The scaling relationships, the Hugoniot behavior, and the spall strengths predicted by the MD and QCGD simulations will be presented. This work is sponsored by the US Army Research Office under Contract# W911NF-14-1-0257.

  16. Enabling Lorentz boosted frame particle-in-cell simulations of laser wakefield acceleration in quasi-3D geometry

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng; Xu, Xinlu; Davidson, Asher; Tableman, Adam; Dalichaouch, Thamine; Li, Fei; Meyers, Michael D.; An, Weiming; Tsung, Frank S.; Decyk, Viktor K.; Fiuza, Frederico; Vieira, Jorge; Fonseca, Ricardo A.; Lu, Wei; Silva, Luis O.; Mori, Warren B.

    2016-07-01

    When modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) algorithm in a Lorentz boosted frame, the plasma drifts relativistically at β_b c towards the laser, which can lead to a computational speedup of ∼ γ_b² = (1 − β_b²)⁻¹. Meanwhile, when LWFA is modeled in the quasi-3D geometry, in which the electromagnetic fields and current are decomposed into a limited number of azimuthal harmonics, speedups are achieved by modeling three-dimensional (3D) problems with computational loads on the order of two-dimensional r-z simulations. Here, we describe a method to combine the speedups from the Lorentz boosted frame and quasi-3D algorithms. The key to the combination is the use of a hybrid Yee-FFT solver in the quasi-3D geometry that significantly mitigates the Numerical Cerenkov Instability (NCI), which inevitably arises in a Lorentz boosted frame due to the unphysical coupling of Langmuir modes and EM modes of the relativistically drifting plasma in these simulations. In addition, based on the space-time distribution of the LWFA data in the lab and boosted frames, we propose to use a moving window that follows the drifting plasma, instead of following the laser driver as is done in LWFA lab frame simulations, in order to further reduce the computational load. We describe the details of how the NCI is mitigated in the quasi-3D geometry, the setups for simulations which combine the Lorentz boosted frame, quasi-3D geometry, and the use of a moving window, and compare the results from these simulations against their corresponding lab frame cases. Good agreement is obtained among these sample simulations, particularly when there is no self-trapping, which demonstrates that it is possible to combine the Lorentz boosted frame and quasi-3D algorithms when modeling LWFA. We also discuss the preliminary speedups achieved in these sample simulations.

  17. GPU accelerated Monte-Carlo simulation of SEM images for metrology

    NASA Astrophysics Data System (ADS)

    Verduin, T.; Lokhorst, S. R.; Hagen, C. W.

    2016-03-01

    In this work we address the computation times of numerical studies in dimensional metrology. In particular, full Monte-Carlo simulation programs for scanning electron microscopy (SEM) image acquisition are known to be notoriously slow. Our quest to reduce the computation time of SEM image simulation has led us to investigate the use of graphics processing units (GPUs) for metrology. We have succeeded in creating a full Monte-Carlo simulation program for SEM images which runs entirely on a GPU. The physical scattering models of this GPU simulator are identical to those of a previous CPU-based simulator, which includes the dielectric function model for inelastic scattering as well as refinements for low-voltage SEM applications. As a case study for the performance, we considered the simulated exposure of a complex feature: an isolated silicon line with rough sidewalls located on a flat silicon substrate. The surface of the rough feature is decomposed into 408 012 triangles. We have used an exposure dose of 6 mC/cm², which corresponds to 6 553 600 primary electrons on average (Poisson distributed). We repeat the simulation for various primary electron energies: 300 eV, 500 eV, 800 eV, 1 keV, 3 keV and 5 keV. We first run the simulation on a GeForce GTX480 from NVIDIA. The very same simulation is repeated with our CPU-based program, for which we have used an Intel Xeon X5650. Apart from statistics in the simulation, no difference is found between the CPU and GPU simulated results. The GTX480 generates the images (depending on the primary electron energy) 350 to 425 times faster than a single-threaded Intel X5650 CPU. Although this is a tremendous speedup, we have not actually reached the maximum throughput because of the limited amount of memory available on the GTX480. Nevertheless, the speedup enables the fast acquisition of simulated SEM images for metrology. We now have the potential to investigate case studies in CD-SEM metrology which would otherwise take an unreasonable amount of computation time.

  18. Cavity control system advanced modeling and simulations for TESLA linear accelerator and free electron laser

    NASA Astrophysics Data System (ADS)

    Czarski, Tomasz; Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Simrock, Stefan

    2004-07-01

    The cavity control system for the TESLA (TeV-Energy Superconducting Linear Accelerator) project is first introduced. An elementary analysis of the cavity resonator at the RF (radio frequency) and low-level frequency, with signal and power considerations, is presented. Digital signal processing is proposed for detection of the field vector. An electromechanical model accounting for Lorentz force detuning is applied to analyze the basic features of the system performance. For multiple cavities driven by one klystron, control of the field vector sum is considered. A Simulink model implementation is developed to explore the feedback and feed-forward operation of the system, and some experimental results concerning signals and power are presented.

  19. Simulation and steering in the intertank matching section of the ground test accelerator

    SciTech Connect

    Yuan, V.W.; Bolme, G.O.; Johnson, K.F.; Mottershead, C.T.; Sander, O.R.; Smith, M.T.; Erickson, J.L.

    1994-10-01

    The Intertank Matching Section (IMS) of the Ground Test Accelerator (GTA) is a short (36 cm) beamline designed to match the Radio Frequency Quadrupole (RFQ) exit beam into the first Drift Tube LINAC (DTL) tank. The IMS contains two steering quadrupoles (SMQs) and four variable-field focussing quads (VFQs). The SMQs are fixed strength permanent magnet quadrupoles on mechanical actuators capable of transverse movement for the purpose of steering the beam. The upstream and downstream steering quadrupoles are labelled SMQ1 and SMQ4 respectively. Also contained in the IMS are two RF cavities for longitudinal matching.

  20. Simulation and steering in the Intertank matching section of the ground test accelerator

    NASA Astrophysics Data System (ADS)

    Yuan, V. W.; Bolme, G. O.; Erickson, J. L.; Johnson, K. F.; Mottershead, C. T.; Sander, O. R.; Smith, M. T.

    1995-05-01

    The Intertank Matching Section (IMS) of the Ground Test Accelerator (GTA) is a short (36 cm) beamline designed to match the Radio Frequency Quadrupole (RFQ) exit beam into the first Drift Tube LINAC (DTL) tank. The IMS contains two steering quadrupoles (SMQs) and four variable-field focusing quads (VFQs). The SMQs are fixed-strength permanent magnet quadrupoles on mechanical actuators capable of transverse movement for the purpose of steering the beam. Also contained in the IMS are two RF cavities for longitudinal matching. A comparison of measured to calculated steering coefficients has been made for data taken in 3 different tunes of the IMS transport line.

  1. Test simulation of neutron damage to electronic components using accelerator facilities

    NASA Astrophysics Data System (ADS)

    King, D. B.; Fleming, R. M.; Bielejec, E. S.; McDonald, J. K.; Vizkelethy, G.

    2015-12-01

    The purpose of this work is to demonstrate equivalent bipolar transistor damage response to neutrons and silicon ions. We report on irradiation tests performed at the White Sands Missile Range Fast Burst Reactor, the Sandia National Laboratories (SNL) Annular Core Research Reactor, the SNL SPHINX accelerator, and the SNL Ion Beam Laboratory using commercial silicon npn bipolar junction transistors (BJTs) and III-V Npn heterojunction bipolar transistors (HBTs). Late time and early time gain metrics as well as defect spectra measurements are reported.

  2. Numerical Simulation of Laser-driven In-Tube Accelerator on Supersonic Condition

    SciTech Connect

    Kim, Sukyum; Jeung, In-Seuck; Choi, Jeong-Yeol

    2004-03-30

    Recently, several laser propulsion vehicles have been launched successfully, but these vehicles have remained at very low subsonic speeds. The Laser-driven In-Tube Accelerator (LITA) has been developed as a unique laser propulsion system at Tohoku University. In this paper, flow characteristics and momentum coupling coefficients are studied numerically under supersonic conditions with the same configuration as LITA. Because of aerodynamic drag, the coupling coefficient could not be obtained correctly, especially at low energy input. In this study, the coupling coefficient was therefore calculated using the concept of the effective impulse.

  3. Numerical Simulation of Laser-driven In-Tube Accelerator on Supersonic Condition

    NASA Astrophysics Data System (ADS)

    Kim, Sukyum; Jeung, In-Seuck; Choi, Jeong-Yeol

    2004-03-01

    Recently, several laser propulsion vehicles have been launched successfully, but these vehicles have remained at very low subsonic speeds. The Laser-driven In-Tube Accelerator (LITA) has been developed as a unique laser propulsion system at Tohoku University. In this paper, flow characteristics and momentum coupling coefficients are studied numerically under supersonic conditions with the same configuration as LITA. Because of aerodynamic drag, the coupling coefficient could not be obtained correctly, especially at low energy input. In this study, the coupling coefficient was therefore calculated using the concept of the effective impulse.

  4. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
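
    The building blocks can be illustrated with a plain waveform-relaxation sketch: each component's waveform is solved over the whole time window with the other components frozen at the previous iterate, and the update is then overrelaxed. A single scalar omega is used here (the frequency-independent variant that the paper improves upon); the convolution SOR of the paper replaces this scalar by a convolution kernel acting on the waveform update. The test system and parameter values are illustrative assumptions.

      import numpy as np

      # Waveform-relaxation Gauss-Seidel with (frequency-independent) SOR acceleration
      # for dx/dt = A x, x(0) = x0, discretised with backward Euler on a fixed grid.
      A  = np.array([[-2.0,  1.0],
                     [ 1.0, -2.0]])
      x0 = np.array([1.0, 0.0])
      T, nt = 2.0, 200
      dt = T / nt
      omega = 1.2                                  # overrelaxation parameter (illustrative)

      n = len(x0)
      X = np.tile(x0, (nt + 1, 1))                 # initial guess: constant waveforms

      for sweep in range(50):
          X_old = X.copy()
          for i in range(n):                       # Gauss-Seidel over components
              xi = np.empty(nt + 1)
              xi[0] = x0[i]
              for k in range(nt):                  # backward Euler for the decoupled scalar ODE
                  row = X_old[k + 1].copy()
                  row[:i] = X[k + 1, :i]           # already-updated components
                  coupling = A[i] @ row - A[i, i] * row[i]   # off-diagonal coupling only
                  xi[k + 1] = (xi[k] + dt * coupling) / (1.0 - dt * A[i, i])
              X[:, i] = X_old[:, i] + omega * (xi - X_old[:, i])   # SOR waveform update
          if np.max(np.abs(X - X_old)) < 1e-10:
              break

      print(f"converged after {sweep + 1} sweeps, x(T) ~ {X[-1]}")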

  5. Accelerating the Customer-Driven Microgrid Through Real-Time Digital Simulation

    SciTech Connect

    I. Leonard; T. Baldwin; M. Sloderbeck

    2009-07-01

    Comprehensive design and testing of realistic customer-driven microgrids requires a high performance simulation platform capable of incorporating power system and control models with external hardware systems. Traditional non real-time simulation is unable to fully capture the level of detail necessary to expose real-world implementation issues. With a real-time digital simulator as its foundation, a high-fidelity simulation environment that includes a robust electrical power system model, advanced control architecture, and a highly adaptable communication network is introduced. Hardware-in-the-loop implementation approaches for the hardware-based control and communication systems are included. An overview of the existing power system model and its suitability for investigation of autonomous island formation within the microgrid is additionally presented. Further test plans are also documented.

  6. Reference field specification and preliminary beam selection strategy for accelerator-based GCR simulation.

    PubMed

    Slaba, Tony C; Blattnig, Steve R; Norbury, John W; Rusek, Adam; La Tessa, Chiara

    2016-02-01

    The galactic cosmic ray (GCR) simulator at the NASA Space Radiation Laboratory (NSRL) is intended to deliver the broad spectrum of particles and energies encountered in deep space to biological targets in a controlled laboratory setting. In this work, certain aspects of simulating the GCR environment in the laboratory are discussed. Reference field specification and beam selection strategies at NSRL are the main focus, but the analysis presented herein may be modified for other facilities and possible biological considerations. First, comparisons are made between direct simulation of the external, free space GCR field and simulation of the induced tissue field behind shielding. It is found that upper energy constraints at NSRL limit the ability to simulate the external, free space field directly (i.e. shielding placed in the beam line in front of a biological target and exposed to a free space spectrum). Second, variation in the induced tissue field associated with shielding configuration and solar activity is addressed. It is found that the observed variation is likely within the uncertainty associated with representing any GCR reference field with discrete ion beams in the laboratory, given current facility constraints. A single reference field for deep space missions is subsequently identified. Third, a preliminary approach for selecting beams at NSRL to simulate the designated reference field is presented. This approach is not a final design for the GCR simulator, but rather a single step within a broader design strategy. It is shown that the beam selection methodology is tied directly to the reference environment, allows facility constraints to be incorporated, and may be adjusted to account for additional constraints imposed by biological or animal care considerations. The major biology questions are not addressed herein but are discussed in a companion paper published in the present issue of this journal. Drawbacks of the proposed methodology are discussed

  7. Reference field specification and preliminary beam selection strategy for accelerator-based GCR simulation

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Norbury, John W.; Rusek, Adam; La Tessa, Chiara

    2016-02-01

    The galactic cosmic ray (GCR) simulator at the NASA Space Radiation Laboratory (NSRL) is intended to deliver the broad spectrum of particles and energies encountered in deep space to biological targets in a controlled laboratory setting. In this work, certain aspects of simulating the GCR environment in the laboratory are discussed. Reference field specification and beam selection strategies at NSRL are the main focus, but the analysis presented herein may be modified for other facilities and possible biological considerations. First, comparisons are made between direct simulation of the external, free space GCR field and simulation of the induced tissue field behind shielding. It is found that upper energy constraints at NSRL limit the ability to simulate the external, free space field directly (i.e. shielding placed in the beam line in front of a biological target and exposed to a free space spectrum). Second, variation in the induced tissue field associated with shielding configuration and solar activity is addressed. It is found that the observed variation is likely within the uncertainty associated with representing any GCR reference field with discrete ion beams in the laboratory, given current facility constraints. A single reference field for deep space missions is subsequently identified. Third, a preliminary approach for selecting beams at NSRL to simulate the designated reference field is presented. This approach is not a final design for the GCR simulator, but rather a single step within a broader design strategy. It is shown that the beam selection methodology is tied directly to the reference environment, allows facility constraints to be incorporated, and may be adjusted to account for additional constraints imposed by biological or animal care considerations. The major biology questions are not addressed herein but are discussed in a companion paper published in the present issue of this journal. Drawbacks of the proposed methodology are discussed

  8. Acceleration of plasma flows in the closed magnetic fields: Simulation and analysis

    SciTech Connect

    Mahajan, Swadesh M.; Shatashvili, Nana L.; Mikeladze, Solomon V.; Sigua, Ketevan I.

    2006-06-15

    Within the framework of a two-fluid description, possible pathways for the generation of fast flows (dynamical as well as steady) in closed magnetic fields are established. It is shown that a primary plasma flow (locally sub-Alfvenic) is accelerated while interacting with ambient arcade-like closed field structures. The time scale for creating reasonably fast flows (≳ 100 km/s) is dictated by the initial ion skin depth, while the amplification of the flow depends on the local plasma β. It is shown that the distances over which the flows become 'fast' are ≈ 0.01 R_0 from the interaction surface (R_0 being a characteristic length of the system); later, the fast flow localizes (with dimensions ≲ 0.05 R_0) in the upper central region of the original arcade. For fixed initial temperature, the final speed (≳ 500 km/s) of the accelerated flow and the modification of the field structure are independent of the time duration (lifetime) of the initial flow. In the presence of dissipation, these flows are likely to play a fundamental role in the heating of finely structured stellar atmospheres; their relevance to the solar wind is also obvious.

  9. GPU-accelerated Classical Trajectory Calculation Direct Simulation Monte Carlo applied to shock waves

    NASA Astrophysics Data System (ADS)

    Norman, Paul; Valentini, Paolo; Schwartzentruber, Thomas

    2013-08-01

    In this work we outline a Classical Trajectory Calculation Direct Simulation Monte Carlo (CTC-DSMC) implementation that uses the no-time-counter scheme with a cross-section determined by the interatomic potential energy surface (PES). CTC-DSMC solutions for translational and rotational relaxation in one-dimensional shock waves are compared directly to pure Molecular Dynamics simulations employing an identical PES, where exact agreement is demonstrated for all cases. For the flows considered, long-lived collisions occur within the simulations and their implications for multi-body collisions as well as algorithm implications for the CTC-DSMC method are discussed. A parallelization technique for CTC-DSMC simulations using a heterogeneous multicore CPU/GPU system is demonstrated. Our approach shows good scaling as long as a sufficiently large number of collisions are calculated simultaneously per GPU (˜100,000) at each DSMC iteration. We achieve a maximum speedup of 140× on a 4 GPU/CPU system vs. the performance on one CPU core in serial for a diatomic nitrogen shock. The parallelization approach presented here significantly reduces the cost of CTC-DSMC simulations and has the potential to scale to large CPU/GPU clusters, which could enable future application to 3D flows in strong thermochemical nonequilibrium.
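
    The no-time-counter (NTC) selection step referred to above can be sketched for a single cell as follows. The constant cross-section is a stand-in; in CTC-DSMC the cross-section and the post-collision velocities come from trajectory integration on the interatomic potential energy surface. All numerical values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def ntc_collision_step(velocities, F_N, V_cell, dt, sigma_g_max, sigma_of_g):
          """One no-time-counter (NTC) collision step in a single DSMC cell.
          sigma_of_g gives the cross-section as a function of relative speed g."""
          N = len(velocities)
          # probabilistic rounding of the candidate-pair count
          n_candidates = int(0.5 * N * (N - 1) * F_N * sigma_g_max * dt / V_cell + rng.random())
          n_accepted = 0
          for _ in range(n_candidates):
              i, j = rng.choice(N, size=2, replace=False)
              g = np.linalg.norm(velocities[i] - velocities[j])
              if rng.random() < sigma_of_g(g) * g / sigma_g_max:
                  # a CTC-DSMC code would now integrate the pair's classical trajectory
                  # on the PES to obtain post-collision velocities; here we only count it
                  n_accepted += 1
          return n_candidates, n_accepted

      v = rng.normal(0.0, 300.0, size=(500, 3))        # particle velocities [m/s]
      sigma = lambda g: 4.0e-19                        # constant cross-section [m^2] (stand-in)
      print(ntc_collision_step(v, F_N=1e10, V_cell=1e-9, dt=1e-6,
                               sigma_g_max=4.0e-19 * 2000.0, sigma_of_g=sigma))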

  10. Computational acceleration of orbital neutral sensor ionizer simulation through phenomena separation

    NASA Astrophysics Data System (ADS)

    Font, Gabriel I.

    2016-07-01

    Simulation of orbital phenomena is often difficult because of the non-continuum nature of the flow, which forces the use of particle methods, and the disparate time scales, which make long run times necessary. In this work, the computational workload has been reduced by taking advantage of the low number of collisions between different species. This allows each population of particles to be brought into convergence separately, using a time step size optimized for its particular motion. The converged populations are then brought together to simulate low-probability phenomena, such as ionization or excitation, on much longer time scales. This technique reduces run times by a factor of 10³-10⁴. The technique was applied to the simulation of a low-Earth-orbit neutral species sensor with an ionizing element. Comparison with laboratory experiments of ion impacts generated by electron flux shows very good agreement.

  11. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations

    NASA Astrophysics Data System (ADS)

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-01

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  12. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations.

    PubMed

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-01

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns. PMID:26976736

  13. Simulation and steering in the Intertank matching section of the ground test accelerator

    SciTech Connect

    Yuan, V.W.; Bolme, G.O.; Erickson, J.L.; Johnson, K.F.; Mottershead, C.T.; Sander, O.R.; Smith, M.T.

    1995-05-05

    The Intertank Matching Section (IMS) of the Ground Test Accelerator (GTA) is a short (36 cm) beamline designed to match the Radio Frequency Quadrupole (RFQ) exit beam into the first Drift Tube LINAC (DTL) tank. The IMS contains two steering quadrupoles (SMQs) and four variable-field focusing quads (VFQs). The SMQs are fixed-strength permanent magnet quadrupoles on mechanical actuators capable of transverse movement for the purpose of steering the beam. Also contained in the IMS are two RF cavities for longitudinal matching. A comparison of measured to calculated steering coefficients has been made for data taken in 3 different tunes of the IMS transport line. © 1995 American Institute of Physics.

  14. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
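
    One generic way to combine the two ingredients named above, a fault-tree enumeration of rare failure combinations and Monte Carlo estimation of the conditional outcome of each combination, is sketched below. The branch list, probabilities and the toy encounter model are hypothetical placeholders and do not represent the Advanced Airspace Concept model itself.

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical failure branches and per-operation probabilities (placeholders)
      branches = {
          "transponder_fail":        1e-4,
          "pilot_visual_fail":       5e-4,
          "conflict_detection_fail": 2e-4,
      }

      def conditional_risk(branch, n=20000):
          """Toy conditional model: probability of separation loss given the branch,
          estimated by sampling a simplified encounter (illustrative only)."""
          miss_distance = rng.normal(loc=2.0, scale=1.0, size=n)   # arbitrary units
          if branch != "conflict_detection_fail":
              miss_distance += 1.0                                 # some mitigation still acts
          return np.mean(miss_distance < 0.5)

      # Total risk = sum over enumerated branches of P(branch) * P(loss | branch)
      total = sum(p * conditional_risk(b) for b, p in branches.items())
      print(f"estimated risk per operation ~ {total:.2e}")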

  15. The application of front tracking to the simulation of shock refractions and shock accelerated interface mixing

    SciTech Connect

    Sharp, D.H.; Grove, J.W.; Yang, Y.; Boston, B.; Holmes, R.; Zhang, Q.; Glimm, J.

    1993-08-01

    The mixing behavior of two or more fluids plays an important role in a number of physical processes and technological applications. The authors consider two basic types of mechanical (i.e., non-diffusive) fluid mixing. If a heavy fluid is suspended above a lighter fluid in the presence of a gravitational field, small perturbations at the fluid interface will grow. This process is known as the Rayleigh-Taylor instability. One can visualize this instability in terms of bubbles of the light fluid rising into the heavy fluid, and fingers (spikes) of the heavy fluid falling into the light fluid. A similar process, called the Richtmyer-Meshkov instability, occurs when an interface is accelerated by a shock wave. These instabilities have several common features. Indeed, Richtmyer's approach to understanding the shock-induced instability was to view that process as resulting from an acceleration of the two fluids by a strong gravitational field acting for a short time. Here, the authors report new results on the Rayleigh-Taylor and Richtmyer-Meshkov instabilities. Highlights include calculations of Richtmyer-Meshkov instabilities in curved geometries without grid orientation effects, improved agreement between computations and experiments in the case of Richtmyer-Meshkov instabilities at a plane interface, and a demonstration of an increase in the Rayleigh-Taylor mixing layer growth rate with increasing compressibility, along with a loss of universality of this growth rate. The principal computational tool used in obtaining these results was a code based on the front tracking method.

  16. Dark Energy: Anatomy of a Paradigm Shift in Cosmology

    NASA Astrophysics Data System (ADS)

    Hocutt, Hannah

    2016-03-01

    Science is defined by its ability to shift its paradigm on the basis of observation and data. Throughout history, the worldviews of the scientific community have been drastically changed to fit that which was scientifically determined to be fact. One of the latest paradigm shifts happened over the shape and fate of the universe. This research details the progression from the early paradigm of a decelerating expanding universe to the discovery of dark energy and the movement to the current paradigm of a universe that is not only expanding but is also accelerating. Advisor: Dr. Kristine Larsen.

  17. Accelerating molecular simulations of proteins using Bayesian inference on weak information.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Dill, Ken A

    2015-09-22

    Atomistic molecular dynamics (MD) simulations of protein molecules are too computationally expensive to predict most native structures from amino acid sequences. Here, we integrate "weak" external knowledge into folding simulations to predict protein structures, given their sequence. For example, we instruct the computer "to form a hydrophobic core," "to form good secondary structures," or "to seek a compact state." This kind of information has been too combinatoric, nonspecific, and vague to help guide MD simulations before. Within atomistic replica-exchange molecular dynamics (REMD), we develop a statistical mechanical framework, modeling using limited data with coarse physical insight(s) (MELD + CPI), for harnessing weak information. As a test, we apply MELD + CPI to predict the native structures of 20 small proteins. MELD + CPI samples to within less than 3.2 Å from native for all 20 and correctly chooses the native structures (<4 Å) for 15 of them, including ubiquitin, a millisecond folder. MELD + CPI is up to five orders of magnitude faster than brute-force MD, satisfies detailed balance, and should scale well to larger proteins. MELD + CPI may be useful where physics-based simulations are needed to study protein mechanisms and populations and where we have some heuristic or coarse physical knowledge about states of interest. PMID:26351667

  18. Large-eddy and unsteady RANS simulations of a shock-accelerated heavy gas cylinder

    SciTech Connect

    Morgan, B. E.; Greenough, J. A.

    2015-04-08

    Two-dimensional numerical simulations of the Richtmyer–Meshkov unstable “shock-jet” problem are conducted using both large-eddy simulation (LES) and unsteady Reynolds-averaged Navier–Stokes (URANS) approaches in an arbitrary Lagrangian–Eulerian hydrodynamics code. Turbulence statistics are extracted from LES by running an ensemble of simulations with multimode perturbations to the initial conditions. Detailed grid convergence studies are conducted, and LES results are found to agree well with both experiment and high-order simulations conducted by Shankar et al. (Phys Fluids 23, 024102, 2011). URANS results using a k–L approach are found to be highly sensitive to initialization of the turbulence lengthscale L and to the time at which L becomes resolved on the computational mesh. As a result, it is observed that a gradient diffusion closure for turbulent species flux is a poor approximation at early times, and a new closure based on the mass-flux velocity is proposed for low-Reynolds-number mixing.
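
    For orientation, the gradient diffusion closure found wanting above models the turbulent species flux as proportional to the mean mass-fraction gradient, in its standard textbook form

      \overline{\rho u_i'' Y''} \;=\; -\,\frac{\mu_t}{\mathrm{Sc}_t}\,\frac{\partial \widetilde{Y}}{\partial x_i},

    with μ_t the turbulent viscosity and Sc_t a turbulent Schmidt number; the mass-flux-velocity closure proposed in the paper is not reproduced here.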

  19. Large-eddy and unsteady RANS simulations of a shock-accelerated heavy gas cylinder

    DOE PAGESBeta

    Morgan, B. E.; Greenough, J. A.

    2015-04-08

    Two-dimensional numerical simulations of the Richtmyer–Meshkov unstable “shock-jet” problem are conducted using both large-eddy simulation (LES) and unsteady Reynolds-averaged Navier–Stokes (URANS) approaches in an arbitrary Lagrangian–Eulerian hydrodynamics code. Turbulence statistics are extracted from LES by running an ensemble of simulations with multimode perturbations to the initial conditions. Detailed grid convergence studies are conducted, and LES results are found to agree well with both experiment and high-order simulations conducted by Shankar et al. (Phys Fluids 23, 024102, 2011). URANS results using a k–L approach are found to be highly sensitive to initialization of the turbulence lengthscale L and to the time at which L becomes resolved on the computational mesh. As a result, it is observed that a gradient diffusion closure for turbulent species flux is a poor approximation at early times, and a new closure based on the mass-flux velocity is proposed for low-Reynolds-number mixing.

  20. Hardware acceleration of a Monte Carlo simulation for photodynamic therapy [corrected] treatment planning.

    PubMed

    Lo, William Chun Yip; Redmond, Keith; Luu, Jason; Chow, Paul; Rose, Jonathan; Lilge, Lothar

    2009-01-01

    Monte Carlo (MC) simulations are being used extensively in the field of medical biophysics, particularly for modeling light propagation in tissues. The high computation time for MC limits its use to solving only the forward solutions for a given source geometry, emission profile, and optical interaction coefficients of the tissue. However, applications such as photodynamic therapy treatment planning or image reconstruction in diffuse optical tomography require solving the inverse problem given a desired dose distribution or absorber distribution, respectively. A faster means for performing MC simulations would enable the use of MC-based models for accomplishing such tasks. To explore this possibility, a digital hardware implementation of an MC simulation based on the Monte Carlo for Multi-Layered media (MCML) software was implemented on a development platform with multiple field-programmable gate arrays (FPGAs). The hardware performed the MC simulation on average 80 times faster and was 45 times more energy efficient than the MCML software executed on a 3-GHz Intel Xeon processor. The resulting isofluence lines closely matched those produced by MCML in software, diverging by less than 0.1 mm for fluence levels as low as 0.00001 cm⁻² in a skin model. PMID:19256707
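
    A minimal single-layer sketch of the photon-packet random walk at the heart of MCML-style codes is given below: exponential step sampling, absorption weighting and Henyey-Greenstein scattering. Layer boundaries, Fresnel reflection and the weight roulette of the real MCML code are omitted, and the optical properties are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      mu_a, mu_s, g = 0.5, 10.0, 0.9          # absorption, scattering [1/cm], anisotropy
      mu_t = mu_a + mu_s

      def spin(u, cos_t, phi):
          """Rotate direction vector u by polar angle theta (given as cos_t) and azimuth phi."""
          sin_t = np.sqrt(1.0 - cos_t * cos_t)
          ux, uy, uz = u
          if abs(uz) > 0.99999:               # nearly vertical: use the simplified rotation
              return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), np.sign(uz) * cos_t])
          d = np.sqrt(1.0 - uz * uz)
          return np.array([
              sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / d + ux * cos_t,
              sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / d + uy * cos_t,
              -sin_t * np.cos(phi) * d + uz * cos_t,
          ])

      n_packets, absorbed = 1000, 0.0
      for _ in range(n_packets):
          pos, u, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
          while w > 1e-3:
              pos = pos - np.log(rng.random()) / mu_t * u    # sample free path and move
              if pos[2] < 0.0:                               # packet escaped through the surface
                  break
              dw = w * mu_a / mu_t                           # fraction absorbed at this site
              absorbed += dw
              w -= dw
              # Henyey-Greenstein polar angle, uniform azimuth, then rotate the direction
              tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
              cos_t = (1.0 + g * g - tmp * tmp) / (2.0 * g)
              u = spin(u, cos_t, 2.0 * np.pi * rng.random())

      print(f"fraction of launched weight absorbed: {absorbed / n_packets:.3f}")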

  1. Accelerating molecular simulations of proteins using Bayesian inference on weak information

    PubMed Central

    Perez, Alberto; MacCallum, Justin L.; Dill, Ken A.

    2015-01-01

    Atomistic molecular dynamics (MD) simulations of protein molecules are too computationally expensive to predict most native structures from amino acid sequences. Here, we integrate “weak” external knowledge into folding simulations to predict protein structures, given their sequence. For example, we instruct the computer “to form a hydrophobic core,” “to form good secondary structures,” or “to seek a compact state.” This kind of information has been too combinatoric, nonspecific, and vague to help guide MD simulations before. Within atomistic replica-exchange molecular dynamics (REMD), we develop a statistical mechanical framework, modeling using limited data with coarse physical insight(s) (MELD + CPI), for harnessing weak information. As a test, we apply MELD + CPI to predict the native structures of 20 small proteins. MELD + CPI samples to within 3.2 Å of the native structure for all 20 and correctly chooses the native structures (<4 Å) for 15 of them, including ubiquitin, a millisecond folder. MELD + CPI is up to five orders of magnitude faster than brute-force MD, satisfies detailed balance, and should scale well to larger proteins. MELD + CPI may be useful where physics-based simulations are needed to study protein mechanisms and populations and where we have some heuristic or coarse physical knowledge about states of interest. PMID:26351667
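
    For reference, the exchange step at the heart of the replica-exchange MD used here is the standard Metropolis swap between two temperatures; the sketch below assumes energies in kcal/mol and shows plain temperature REMD only, not the MELD restraint machinery itself.

```python
import math, random

def remd_swap_accept(E_i, E_j, T_i, T_j, k_B=0.0019872041):
    """Standard Metropolis criterion for exchanging configurations between
    replicas at temperatures T_i and T_j (kcal/mol and Kelvin assumed):
    accept with probability min(1, exp[-(beta_i - beta_j)(E_j - E_i)])."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_j - E_i)
    return delta <= 0.0 or random.random() < math.exp(-delta)

# e.g. attempt a swap between a 300 K and a 320 K replica
accepted = remd_swap_accept(E_i=-1203.4, E_j=-1198.7, T_i=300.0, T_j=320.0)
```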

  2. Organizational Paradigm Shifts.

    ERIC Educational Resources Information Center

    National Association of College and University Business Officers, Washington, DC.

    This collection of essays explores a new paradigm of higher education. The first essay, "Beyond Re-engineering: Changing the Organizational Paradigm" (L. Edwin Coate), suggests a model of quality process management and a structure for managing organizational change. "Thinking About Consortia" (Mary Jo Maydew) discusses cooperative effort and…

  3. The Investment Paradigm

    ERIC Educational Resources Information Center

    Perna, Mark C.

    2005-01-01

    Is marketing an expense or an investment? Most accountants will claim that marketing is an expense, and clearly that seems true when cutting the checks to fund these efforts. When it is done properly, marketing is the best investment. A key principle to Smart Marketing is the Investment Paradigm. The Investment Paradigm is understanding that every…

  4. An Integrative Paradigm

    ERIC Educational Resources Information Center

    Hammack, Phillip L.

    2005-01-01

    Through the application of life course theory to the study of sexual orientation, this paper specifies a new paradigm for research on human sexual orientation that seeks to reconcile divisions among biological, social science, and humanistic paradigms. Recognizing the historical, social, and cultural relativity of human development, this paradigm…

  5. Production-passage-time approximation: a new approximation method to accelerate the simulation process of enzymatic reactions.

    PubMed

    Kuwahara, Hiroyuki; Myers, Chris J

    2008-09-01

    Given the substantial computational requirements of stochastic simulation, approximation is essential for efficient analysis of any realistic biochemical system. This paper introduces a new approximation method to reduce the computational cost of stochastic simulations of an enzymatic reaction scheme which in biochemical systems often includes rapidly changing fast reactions with enzyme and enzyme-substrate complex molecules present in very small counts. Our new method removes the substrate dissociation reaction by approximating the passage time of the formation of each enzyme-substrate complex molecule that is destined for a production reaction. This approach skips the firings of unimportant yet expensive reaction events, resulting in a substantial acceleration in the stochastic simulations of enzymatic reactions. Additionally, since all the parameters used in our new approach can be derived from the Michaelis-Menten parameters, which can be measured from experimental data, applications of this approximation can be practical even without full knowledge of the underlying enzymatic reaction. Here, we apply this new method to various enzymatic reaction systems, resulting in a speedup of orders of magnitude in temporal behavior analysis without any significant loss in accuracy. Furthermore, we show that our new method can perform better than some of the best existing approximation methods for enzymatic reactions in terms of accuracy and efficiency. PMID:18662102
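
    A simpler reduction in the same spirit (not the authors' production-passage-time construction) is to collapse the elementary enzyme scheme into a single production channel with a Michaelis–Menten propensity; the sketch below runs a Gillespie SSA on that reduced channel, with all rate values purely illustrative.

```python
import math, random

def ssa_mm_reduction(S0, E_total, kcat, Km, t_end):
    """Gillespie SSA for the reduced scheme  S -> P  with a Michaelis-Menten
    propensity a(S) = kcat * E_total * S / (Km + S).  This quasi-steady-state
    reduction skips the fast binding/unbinding events of the full elementary
    scheme; it is a stand-in for, not a copy of, the method in the record."""
    t, S, P = 0.0, S0, 0
    while t < t_end and S > 0:
        a = kcat * E_total * S / (Km + S)
        t += -math.log(1.0 - random.random()) / a   # exponential waiting time
        S -= 1
        P += 1
    return t, S, P

print(ssa_mm_reduction(S0=500, E_total=10, kcat=0.1, Km=200.0, t_end=1000.0))
```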

  6. Modeling of 10 GeV-1 TeV laser-plasma accelerators using Lorentz boosted simulations

    SciTech Connect

    Vay, J. -L.; Geddes, C. G. R.; Esarey, E.; Schroeder, C. B.; Leemans, W. P.; Cormier-Michel, E.; Grote, D. P.

    2011-12-13

    We study modeling of laser-plasma wakefield accelerators in an optimal frame of reference [J.-L. Vay, Phys. Rev. Lett. 98, 130405 (2007)] that allows direct and efficient full-scale modeling of deeply depleted and beam loaded laser-plasma stages of 10 GeV-1 TeV (parameters not computationally accessible otherwise). This verifies the scaling of plasma accelerators to very high energies and accurately models the laser evolution and the accelerated electron beam transverse dynamics and energy spread. Speedups of over 4, 5, and 6 orders of magnitude are achieved for the modeling of 10 GeV, 100 GeV, and 1 TeV class stages, respectively. Agreement at the percentage level is demonstrated between simulations using different frames of reference for a 0.1 GeV class stage. In addition, obtaining these speedups and levels of accuracy was permitted by solutions for handling data input (in particular, particle and laser beams injection) and output in a relativistically boosted frame of reference, as well as mitigation of a high-frequency instability that otherwise limits effectiveness.
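
    The reason a boosted frame yields such large speedups is the standard scale-compression argument: the plasma column Lorentz-contracts while the laser wavelength dilates, shrinking the separation of resolved scales (and hence the number of time steps) by roughly (1+β)²γ². The sketch below evaluates only that textbook estimate; it is not a measurement taken from the paper.

```python
import math

def boosted_frame_speedup(gamma_boost):
    """Rough scaling estimate of the speedup from simulating a laser-plasma
    stage in a frame boosted with Lorentz factor gamma_boost: the plasma
    length contracts by gamma while the laser wavelength dilates by
    (1 + beta) * gamma, so the scale separation shrinks by
    ~ (1 + beta)^2 * gamma^2.  An order-of-magnitude argument only."""
    beta = math.sqrt(1.0 - 1.0 / gamma_boost**2)
    return (1.0 + beta)**2 * gamma_boost**2

for g in (10.0, 50.0, 100.0):
    print(f"gamma_boost = {g:6.1f} -> estimated speedup ~ {boosted_frame_speedup(g):.2e}")
```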

  7. GPU-accelerated real-time IR smoke screen simulation and assessment of its obscuration

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Zhang, Jian-qi; Huang, Xi; Liu, De-lian

    2012-01-01

    With the growing demand for Battlefield Environment Simulation (BES), the IR smoke screen, which is computationally expensive and absolutely indispensable, should be modeled true to life and with correct thermal radiation characteristics. This paper analyzes the features of an IR smoke screen and presents an IR smoke screen model based on light extinction, particle dispersion and temperature attenuation, which is calculated on the GPU and rendered to the screen in real time. Thus a method that is both realistic in appearance and real-time in efficiency is presented. Additionally, a comparison between the simulated results and measured data is made to verify the correctness of the smoke screen's obscuration, which illustrates the effect of its interference feature in an infrared scene.
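
    The light-extinction ingredient of such a smoke model is typically a Beer–Lambert law; a minimal sketch follows, with the mass extinction coefficient, concentration, and path length all illustrative assumptions rather than values from the record.

```python
import math

def smoke_transmittance(alpha_ext, concentration, path_length):
    """Beer-Lambert extinction through a smoke screen: T = exp(-alpha * c * L),
    with alpha the mass extinction coefficient (m^2/g), c the smoke
    concentration (g/m^3), and L the path length (m).  A generic building
    block for IR obscuration models; dispersion and thermal emission are
    handled separately in the record's GPU model."""
    return math.exp(-alpha_ext * concentration * path_length)

# e.g. a 20 m deep screen at 1.5 g/m^3 with an assumed 0.5 m^2/g coefficient
print(smoke_transmittance(alpha_ext=0.5, concentration=1.5, path_length=20.0))
```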

  8. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    DOE PAGESBeta

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.

  9. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    SciTech Connect

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.
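
    Once an estimate of the density of states g(U) is available, the microcanonical temperature mentioned above follows from T(U) = 1/(k_B d ln g/dU); the finite-difference sketch below, in reduced units, is a generic post-processing step and is independent of the Gibbs-sampling exchange scheme itself.

```python
import numpy as np

def microcanonical_temperature(U, ln_g, k_B=1.0):
    """Estimate T(U) = 1 / (k_B * d ln g / dU) from a tabulated (estimated)
    density of states g(U), given as ln_g on the energy grid U."""
    dlng_dU = np.gradient(ln_g, U)
    return 1.0 / (k_B * dlng_dU)

# Illustrative: ln g(U) ~ c * ln U gives T(U) ~ U / c
U = np.linspace(1.0, 100.0, 200)
ln_g = 3.0 * np.log(U)
T_of_U = microcanonical_temperature(U, ln_g)
```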

  10. Effects of numerical methods on comparisons between experiments and simulations of shock-accelerated mixing.

    SciTech Connect

    Rider, William; Kamm, J. R.; Tomkins, C. D.; Zoldi, C. A.; Prestridge, K. P.; Marr-Lyon, M.; Rightley, P. M.; Benjamin, R. F.

    2002-01-01

    We consider the detailed structures of mixing flows for Richtmyer-Meshkov experiments of Prestridge et al. [PRE 00] and Tomkins et al. [TOM 01] and examine the most recent measurements from the experimental apparatus. Numerical simulations of these experiments are performed with three different versions of high resolution finite volume Godunov methods. We compare experimental data with simulations for configurations of one and two diffuse cylinders of SF₆ in air using integral measures as well as fractal analysis and continuous wavelet transforms. The details of the initial conditions have a significant effect on the computed results, especially in the case of the double cylinder. Additionally, these comparisons reveal sensitive dependence of the computed solution on the numerical method.

  11. Accelerated simulation of unfolding and refolding of a large single chain globular protein

    PubMed Central

    Seddon, Gavin M.; Bywater, Robert P.

    2012-01-01

    We have developed novel strategies for contracting simulation times in protein dynamics that enable us to study a complex protein with molecular weight in excess of 34 kDa. Starting from a crystal structure, we produce unfolded and then refolded states for the protein. We then compare these quantitatively using both established and new metrics for protein structure and quality checking. These include use of the programs Concoord and Darvols. Simulation of protein-folded structure well beyond the molten globule state and then recovery back to the folded state is itself new, and our results throw new light on the protein-folding process. We accomplish this using a novel cooling protocol developed for this work. PMID:22870389

  12. Electron acceleration at nearly perpendicular collisionless shocks. I - One-dimensional simulations without electron scale fluctuations

    NASA Technical Reports Server (NTRS)

    Krauss-Varban, D.; Burgess, D.; Wu, C. S.

    1989-01-01

    Under certain conditions electrons can be reflected and effectively energized at quasi-perpendicular shocks. This process is most prominent close to the point where the upstream magnetic field is tangent to the curved shock. A theoretical explanation of the underlying physical mechanism has been proposed which assumes conservation of magnetic moment and a static, simplified shock profile. Test particle calculations of the electron reflection process are performed in order to examine the results of the theoretical analysis without imposing these restrictive conditions. A one-dimensional hybrid simulation code generates the characteristic field variations across the shock. Special emphasis is placed on the spatial and temporal length scales involved in the mirroring process. The simulation results agree generally well with the predictions from adiabatic theory. The effects of the cross-shock potential and unsteadiness are quantified, and the influence of field fluctuations on the reflection process is discussed.
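
    The adiabatic-theory criterion being tested here is the usual magnetic-mirror condition: an electron reflects if its upstream pitch angle lies outside the loss cone set by the field compression. The sketch below encodes only that zeroth-order condition; the cross-shock potential correction discussed in the record is ignored, and the field values are illustrative.

```python
import math

def is_reflected(pitch_angle_deg, B_upstream, B_overshoot):
    """Adiabatic (magnetic-moment-conserving) mirror criterion: reflection
    occurs if sin^2(alpha) > B_upstream / B_overshoot, i.e. the pitch angle
    lies outside the loss cone."""
    alpha = math.radians(pitch_angle_deg)
    return math.sin(alpha)**2 > B_upstream / B_overshoot

# A factor-of-4 field compression reflects pitch angles above ~30 degrees
print(is_reflected(pitch_angle_deg=45.0, B_upstream=5.0, B_overshoot=20.0))
```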

  13. A practical perspective on the implementation of hyperdynamics for accelerated simulation

    SciTech Connect

    Kim, Woo Kyun; Falk, Michael L.

    2014-01-28

    Consideration is given to several practical issues arising during the implementation of hyperdynamics, a methodology that extends the time scale of the conventional molecular dynamics simulation potentially by orders of magnitude. First, the methodology is reformulated in terms of the transition rate based on the buffer region approach (buffer rate), which can describe transitions in more general contexts than the transition state theory (TST). It will be shown that hyperdynamics can exactly preserve the buffer rate as well as the TST rate, which broadens the scope of the method. Next, the originally proposed scheme to compute the boost factor on-the-fly is reviewed and some alternative methods, one of which uses the umbrella sampling method, are presented. Finally, the methodology is validated in the context of a 1-dimensional example potential and a 3-dimensional simulation of the motion of an atomic force microscope tip moving along a surface.
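
    As background for the boost-factor discussion, the accelerated time in hyperdynamics is conventionally accumulated as t_boosted = Σ Δt_MD exp(ΔV/k_BT); the sketch below uses eV and Kelvin units and leaves aside how the bias ΔV itself is constructed on the fly, which is the subject of the record.

```python
import math

def hyperdynamics_time(dt_md, bias_energies, temperature_K, k_B=8.617e-5):
    """Accumulate the boosted physical time: each MD step of length dt_md on
    the biased potential advances real time by dt_md * exp(dV / k_B T),
    where dV is the bias energy at that step (eV).  Returns the accelerated
    time and the boost factor relative to plain MD."""
    beta = 1.0 / (k_B * temperature_K)
    t_boosted = sum(dt_md * math.exp(beta * dV) for dV in bias_energies)
    t_md = dt_md * len(bias_energies)
    return t_boosted, t_boosted / t_md

t_acc, boost = hyperdynamics_time(dt_md=1e-15,
                                  bias_energies=[0.15, 0.12, 0.0, 0.18],
                                  temperature_K=300.0)
```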

  14. Accelerated simulation of unfolding and refolding of a large single chain globular protein.

    PubMed

    Seddon, Gavin M; Bywater, Robert P

    2012-07-01

    We have developed novel strategies for contracting simulation times in protein dynamics that enable us to study a complex protein with molecular weight in excess of 34 kDa. Starting from a crystal structure, we produce unfolded and then refolded states for the protein. We then compare these quantitatively using both established and new metrics for protein structure and quality checking. These include use of the programs Concoord and Darvols. Simulation of protein-folded structure well beyond the molten globule state and then recovery back to the folded state is itself new, and our results throw new light on the protein-folding process. We accomplish this using a novel cooling protocol developed for this work. PMID:22870389

  15. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of parallel processing. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  16. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of parallel processing. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  17. Accelerating the Computation of Detailed Chemical Reaction Kinetics for Simulating Combustion of Complex Fuels

    SciTech Connect

    Sankaran, R.; Grout, R.

    2012-01-01

    Combustion of hydrocarbon fuels has been a very challenging scientific and engineering problem due to the complexity of turbulent flows and hydrocarbon reaction kinetics. There is an urgent need to develop an efficient modeling capability to accurately predict the combustion of complex fuels. Detailed chemical kinetic models for the surrogates of fuels such as gasoline, diesel and JP-8 consist of thousands of chemical species and Arrhenius reaction steps. Oxygenated fuels such as bio-fuels and heavier hydrocarbons, such as from newer fossil fuel sources, are expected to have a much more complex chemistry requiring increasingly larger chemical kinetic models. Such models are beyond current computational capability, except for homogeneous or partially stirred reactor type calculations. The advent of highly parallel multi-core processors and graphical processing units (GPUs) promises a steep increase in computational performance in the coming years. This paper will present a software framework that translates the detailed chemical kinetic models to high-performance code targeted for GPU accelerators.
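
    The kernels such a framework ultimately generates evaluate modified-Arrhenius rate coefficients, k = A T^b exp(−Ea/RT), for thousands of reactions in parallel; the NumPy sketch below is a CPU stand-in for that vectorized evaluation, with made-up reaction parameters rather than output of the framework itself.

```python
import numpy as np

def arrhenius_rates(T, A, b, Ea, R=8.314):
    """Evaluate modified-Arrhenius rate coefficients k = A * T^b * exp(-Ea/RT)
    for many reactions at once via NumPy broadcasting.  A, b, Ea are
    per-reaction arrays; units must be consistent (Ea in J/mol here)."""
    T = np.asarray(T, dtype=float)
    return A * T**b * np.exp(-Ea / (R * T))

# Three illustrative (made-up) reactions evaluated at 1500 K
A  = np.array([1.0e13, 5.0e11, 2.0e14])
b  = np.array([0.0, 0.5, -1.0])
Ea = np.array([1.5e5, 8.0e4, 2.0e5])
k  = arrhenius_rates(1500.0, A, b, Ea)
```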

  18. Determining optimization of the initial parameters in Monte Carlo simulation for linear accelerator radiotherapy

    NASA Astrophysics Data System (ADS)

    Chang, Kwo-Ping; Wang, Zhi-Wei; Shiau, An-Cheng

    2014-02-01

    The Monte Carlo (MC) method is a well-known calculation algorithm which can accurately assess the dose distribution for radiotherapy. The present study investigated all the possible regions of the depth-dose or lateral profiles which may affect the fitting of the initial parameters (the mean energy and the radial intensity, i.e. full width at half maximum (FWHM), of the incident electron beam). EGSnrc-based BEAMnrc codes were used to generate the phase space files (SSD=100 cm, FS=40×40 cm²) for the linac (linear accelerator, Varian 21EX, 6 MV photon mode) and the EGSnrc-based DOSXYZnrc code was used to calculate the dose in the region of interest. Interpolation of depth-dose curves of pre-set energies was proposed as a preliminary step for the optimal energy fit. A good approach for determination of the optimal mean energy is the difference comparison of the PDD curves excluding the buildup region, using D(10) as the normalization point. For FWHM fitting, due to electron disequilibrium and the larger statistical uncertainty, using the horn and/or penumbra regions will give inconsistent outcomes at various depths. Difference comparisons should be performed in the flat regions of the off-axis dose profiles at various depths to optimize the FWHM parameter.
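
    A minimal sketch of the fitting metric described here: normalize the simulated and measured percentage-depth-dose curves at D(10), exclude the buildup region, and score the remaining difference. The depth cut-off and the RMS form are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def pdd_difference(depth_cm, pdd_sim, pdd_meas, buildup_cm=1.5, d_norm=10.0):
    """Root-mean-square difference between a simulated and a measured PDD
    curve after normalizing both at D(10) and masking out the buildup
    region; the simulated mean energy minimizing this metric would be the
    optimal fit."""
    depth_cm = np.asarray(depth_cm, dtype=float)
    norm_sim  = pdd_sim  / np.interp(d_norm, depth_cm, pdd_sim)
    norm_meas = pdd_meas / np.interp(d_norm, depth_cm, pdd_meas)
    mask = depth_cm > buildup_cm            # exclude the buildup region
    diff = norm_sim[mask] - norm_meas[mask]
    return np.sqrt(np.mean(diff**2))

# Synthetic curves for illustration only
depth = np.linspace(0.0, 30.0, 121)
meas  = 100.0 * np.exp(-0.050 * depth) * (1.0 - np.exp(-2.0 * depth))
sim   = 100.0 * np.exp(-0.052 * depth) * (1.0 - np.exp(-2.0 * depth))
print(pdd_difference(depth, sim, meas))
```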

  19. Accelerator-Based Biological Irradiation Facility Simulating Neutron Exposure from an Improvised Nuclear Device.

    PubMed

    Xu, Yanping; Randers-Pehrson, Gerhard; Turner, Helen C; Marino, Stephen A; Geard, Charles R; Brenner, David J; Garty, Guy

    2015-10-01

    We describe here an accelerator-based neutron irradiation facility, intended to expose blood or small animals to neutron fields mimicking those from an improvised nuclear device at relevant distances from the epicenter. Neutrons are generated by a mixed proton/deuteron beam on a thick beryllium target, generating a broad spectrum of neutron energies that match those estimated for the Hiroshima bomb at 1.5 km from ground zero. This spectrum, dominated by neutron energies between 0.2 and 9 MeV, is significantly different from the standard reactor fission spectrum, as the initial bomb spectrum changes when the neutrons are transported through air. The neutron and gamma dose rates were measured using a custom tissue-equivalent gas ionization chamber and a compensated Geiger-Mueller dosimeter, respectively. Neutron spectra were evaluated by unfolding measurements using a proton-recoil proportional counter and a liquid scintillator detector. As an illustration of the potential use of this facility we present micronucleus yields in single divided, cytokinesis-blocked human peripheral lymphocytes up to 1.5 Gy demonstrating 3- to 5-fold enhancement over equivalent X-ray doses. This facility is currently in routine use, irradiating both mice and human blood samples for evaluation of neutron-specific biodosimetry assays. Future studies will focus on dose reconstruction in realistic mixed neutron/photon fields. PMID:26414507

  20. Accelerating the Computation of Detailed Chemical Reaction Kinetics for Simulating Combustion of Complex Fuels

    SciTech Connect

    Grout, Ray W

    2012-01-01

    Combustion of hydrocarbon fuels has been a very challenging scientific and engineering problem due to the complexity of turbulent flows and hydrocarbon reaction kinetics. There is an urgent need to develop an efficient modeling capability to accurately predict the combustion of complex fuels. Detailed chemical kinetic models for the surrogates of fuels such as gasoline, diesel and JP-8 consist of thousands of chemical species and Arrhenius reaction steps. Oxygenated fuels such as bio-fuels and heavier hydrocarbons, such as from newer fossil fuel sources, are expected to have a much more complex chemistry requiring increasingly larger chemical kinetic models. Such models are beyond current computational capability, except for homogeneous or partially stirred reactor type calculations. The advent of highly parallel multi-core processors and graphical processing units (GPUs) promises a steep increase in computational performance in the coming years. This paper will present a software framework that translates the detailed chemical kinetic models to high-performance code targeted for GPU accelerators.

  1. Accelerating spectral-element simulations of seismic wave propagation using local time stepping

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Rietmann, M.; Galvez, P.; Nissen-Meyer, T.; Grote, M.; Schenk, O.

    2013-12-01

    Seismic tomography using full-waveform inversion requires accurate simulations of seismic wave propagation in complex 3D media. However, finite element meshing in complex media often leads to areas of local refinement, generating small elements that accurately capture e.g. strong topography and/or low-velocity sediment basins. For explicit time schemes, this dramatically reduces the global time-step for wave-propagation problems due to numerical stability conditions, ultimately making seismic inversions prohibitively expensive. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. Numerical simulations are thus liberated of global time-step constraints potentially speeding up simulation runtimes significantly. We present here a new, efficient multi-level LTS-Newmark scheme for general use with spectral-element methods (SEM) with applications in seismic wave propagation. We fit the implementation of our scheme onto the package SPECFEM3D_Cartesian, which is a widely used community code, simulating seismic and acoustic wave propagation in earth-science applications. Our new LTS scheme extends the 2nd-order accurate Newmark time-stepping scheme, and leads to an efficient implementation, producing real-world speedup of multi-resolution seismic applications. Furthermore, we generalize the method to utilize many refinement levels with a design specifically for continuous finite elements. We demonstrate performance speedup using a state-of-the-art dynamic earthquake rupture model for the Tohoku-Oki event, which is currently limited by small elements along the rupture fault. Utilizing our new algorithmic LTS implementation together with advances in exploiting graphic processing units (GPUs), numerical seismic wave propagation simulations in complex media will dramatically reduce computation times, empowering high
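
    The bookkeeping behind such a multilevel LTS scheme can be illustrated by binning elements into power-of-two time-step levels derived from a per-element CFL estimate; the sketch below uses made-up element sizes and wave speeds and omits the LTS-Newmark update itself.

```python
import math

def lts_levels(element_sizes, wave_speeds, cfl=0.5):
    """Assign each element to a power-of-two local-time-stepping level:
    dt_e = cfl * h_e / c_e, and level p is the largest integer with
    dt_min * 2**p <= dt_e.  Elements on level p are advanced only every
    2**p global substeps."""
    dts = [cfl * h / c for h, c in zip(element_sizes, wave_speeds)]
    dt_min = min(dts)
    levels = [int(math.floor(math.log2(dt / dt_min))) for dt in dts]
    return dt_min, levels

# Illustrative mesh: small surface elements force the smallest step
dt_min, levels = lts_levels(element_sizes=[10.0, 80.0, 300.0, 1200.0],
                            wave_speeds=[3000.0, 3000.0, 5000.0, 8000.0])
```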

  2. 2D hydrodynamic simulations of a variable length gas target for density down-ramp injection of electrons into a laser wakefield accelerator

    NASA Astrophysics Data System (ADS)

    Kononenko, O.; Lopes, N. C.; Cole, J. M.; Kamperidis, C.; Mangles, S. P. D.; Najmudin, Z.; Osterhoff, J.; Poder, K.; Rusby, D.; Symes, D. R.; Warwick, J.; Wood, J. C.; Palmer, C. A. J.

    2016-09-01

    In this work, two-dimensional (2D) hydrodynamic simulations of a variable length gas cell were performed using the open source fluid code OpenFOAM. The gas cell was designed to study controlled injection of electrons into a laser-driven wakefield at the Astra Gemini laser facility. The target consists of two compartments: an accelerator and an injector section connected via an aperture. A sharp transition between the peak and plateau density regions in the injector and accelerator compartments, respectively, was observed in simulations with various inlet pressures. The fluid simulations indicate that the length of the down-ramp connecting the sections depends on the aperture diameter, as does the density drop outside the entrance and the exit cones. Further studies showed that increasing the inlet pressure leads to turbulence and strong fluctuations in density along the axial profile during target filling and, consequently, is expected to negatively impact the accelerator stability.

  3. PHITS simulations of absorbed dose out-of-field and neutron energy spectra for ELEKTA SL25 medical linear accelerator

    NASA Astrophysics Data System (ADS)

    Puchalska, Monika; Sihver, Lembit

    2015-06-01

    Monte Carlo (MC) based calculation methods for modeling photon and particle transport have several potential applications in radiotherapy. An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. It is also essential to minimize the dose to radiosensitive and critical organs. With the MC technique, the dose distributions from both the primary and scattered photons can be calculated. The out-of-field radiation doses are of particular concern when high energy photons are used, since then neutrons are produced both in the accelerator head and inside the patients. Using the MC technique, the created photons and particles can be followed and the transport and energy deposition in all the tissues of the patient can be estimated. This is of great importance during pediatric treatments when minimizing the risk for normal healthy tissue, e.g. secondary cancer. The purpose of this work was to evaluate the efficiency of the 3D general-purpose PHITS MC code as an alternative approach for photon beam specification. In this study, we developed a model of an ELEKTA SL25 accelerator and used the transport code PHITS for calculating the total absorbed dose and the neutron energy spectra in-field and outside the treatment field. This model was validated against measurements performed with bubble detector spectrometers and a Bonner sphere for 18 MV linacs, including both photons and neutrons. The average absolute difference between the calculated and measured absorbed dose for the out-of-field region was around 11%. Taking into account the simplified simulated geometry, which does not include any surrounding scattering materials, the obtained result is very satisfactory. Good agreement between the simulated and measured neutron energy spectra was observed when comparing with data found in the literature.

  4. PHITS simulations of absorbed dose out-of-field and neutron energy spectra for ELEKTA SL25 medical linear accelerator.

    PubMed

    Puchalska, Monika; Sihver, Lembit

    2015-06-21

    Monte Carlo (MC) based calculation methods for modeling photon and particle transport have several potential applications in radiotherapy. An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. It is also essential to minimize the dose to radiosensitive and critical organs. With the MC technique, the dose distributions from both the primary and scattered photons can be calculated. The out-of-field radiation doses are of particular concern when high energy photons are used, since then neutrons are produced both in the accelerator head and inside the patients. Using the MC technique, the created photons and particles can be followed and the transport and energy deposition in all the tissues of the patient can be estimated. This is of great importance during pediatric treatments when minimizing the risk for normal healthy tissue, e.g. secondary cancer. The purpose of this work was to evaluate the efficiency of the 3D general-purpose PHITS MC code as an alternative approach for photon beam specification. In this study, we developed a model of an ELEKTA SL25 accelerator and used the transport code PHITS for calculating the total absorbed dose and the neutron energy spectra in-field and outside the treatment field. This model was validated against measurements performed with bubble detector spectrometers and a Bonner sphere for 18 MV linacs, including both photons and neutrons. The average absolute difference between the calculated and measured absorbed dose for the out-of-field region was around 11%. Taking into account the simplified simulated geometry, which does not include any surrounding scattering materials, the obtained result is very satisfactory. Good agreement between the simulated and measured neutron energy spectra was observed when comparing with data found in the literature. PMID:26057186

  5. Simulation of monoenergetic electron generation via laser wakefield accelerators for 5-25 TW lasers

    SciTech Connect

    Tsung, F.S.; Lu, W.; Tzoufras, M.; Mori, W.B.; Joshi, C.; Vieira, J.M.; Silva, L.O.; Fonseca, R.A.

    2006-05-15

    In 2004, using a 3D particle-in-cell (PIC) model [F. S. Tsung et al., Phys. Rev. Lett. 93, 185004 (2004)], it was predicted that a 16.5 TW, 50 fs laser propagating through nearly 0.5 cm of a 3×10¹⁸ cm⁻³ preformed plasma channel would generate a monoenergetic bunch of electrons with a central energy of 240 MeV after 0.5 cm of propagation. In addition, electrons out to 840 MeV were seen if the laser propagated through 0.8 cm of the same plasma. The simulations showed that self-injection occurs after the laser intensity increases due to a combination of photon deceleration, group velocity dispersion, and self-focusing. The monoenergetic beam is produced because the injection process is clamped by beam loading and the rotation in phase space that results as the beam dephases. Nearly simultaneously [S. P. D. Mangles et al., Nature 431, 535 (2004); C. G. R. Geddes et al., ibid. 431, 538 (2004); J. Faure et al., ibid. 431, 541 (2004)] three experimental groups from around the world reported the generation of nearly a nanocoulomb of low-emittance, monoenergetic electron beams using similar laser powers and pulse lengths as those reported in our simulations. Each of these experiments is modeled using the same 3D PIC code OSIRIS. The simulations indicate that although these experiments use a range of plasma parameters, density profiles, laser powers, and spot sizes, there are some commonalities to the mechanism for the generation of monoenergetic beams. Comments are given on how the energy and beam quality can be improved in the future.

  6. Simulation Study Using an Injection Phase-locked Magnetron as an Alternative Source for SRF Accelerators

    SciTech Connect

    Wang, Haipeng; Plawski, Tomasz E.; Rimmer, Robert A.

    2015-09-01

    As a drop-in replacement for the CEBAF CW klystron system, a 1497 MHz, CW-type high-efficiency magnetron using injection phase lock and amplitude variation is attractive. Amplitude control using magnetic field trimming and anode voltage modulation has been studied using analytical models and MATLAB/Simulink simulations. Since the 1497 MHz magnetron has not been built yet, previously measured characteristics of a 2.45 GHz cooker magnetron are used as a reference. The results of linear responses to the amplitude and phase control of a superconducting RF (SRF) cavity and the expected overall benefit for the current CEBAF and future MEIC RF systems are presented in this paper.

  7. A GPU Accelerated Discontinuous Galerkin Conservative Level Set Method for Simulating Atomization

    NASA Astrophysics Data System (ADS)

    Jibben, Zechariah J.

    This dissertation describes a process for interface capturing via an arbitrary-order, nearly quadrature-free, discontinuous Galerkin (DG) scheme for the conservative level set method (Olsson et al., 2005, 2008). The DG numerical method is utilized to solve both advection and reinitialization, and executed on a refined level set grid (Herrmann, 2008) for effective use of processing power. Computation is executed in parallel utilizing both CPU and GPU architectures to make the method feasible at high order. Finally, a sparse data structure is implemented to take full advantage of parallelism on the GPU, where performance relies on well-managed memory operations. With solution variables projected into a kth order polynomial basis, a k + 1 order convergence rate is found for both advection and reinitialization tests using the method of manufactured solutions. Other standard test cases, such as Zalesak's disk and deformation of columns and spheres in periodic vortices, are also performed, showing several orders of magnitude improvement over traditional WENO level set methods. These tests also show the impact of reinitialization, which often increases shape and volume errors as a result of level set scalar trapping by normal vectors calculated from the local level set field. Accelerating advection via GPU hardware is found to provide a 30x speedup factor comparing a 2.0 GHz Intel Xeon E5-2620 CPU in serial vs. an Nvidia Tesla K20 GPU, with speedup factors increasing with polynomial degree until shared memory is filled. A similar algorithm is implemented for reinitialization, which relies on heavier use of shared and global memory and as a result fills them more quickly and produces smaller speedups of 18x.

  8. A coupled ordinates method for solution acceleration of rarefied gas dynamics simulations

    SciTech Connect

    Das, Shankhadeep; Mathur, Sanjay R.; Alexeenko, Alina; Murthy, Jayathi Y.

    2015-05-15

    Non-equilibrium rarefied flows are frequently encountered in a wide range of applications, including atmospheric re-entry vehicles, vacuum technology, and microscale devices. Rarefied flows at the microscale can be effectively modeled using the ellipsoidal statistical Bhatnagar–Gross–Krook (ESBGK) form of the Boltzmann kinetic equation. Numerical solutions of these equations are often based on the finite volume method (FVM) in physical space and the discrete ordinates method in velocity space. However, existing solvers use a sequential solution procedure wherein the velocity distribution functions are implicitly coupled in physical space, but are solved sequentially in velocity space. This leads to explicit coupling of the distribution function values in velocity space and slows down convergence in systems with low Knudsen numbers. Furthermore, this also makes it difficult to solve multiscale problems or problems in which there is a large range of Knudsen numbers. In this paper, we extend the coupled ordinates method (COMET), previously developed to study participating radiative heat transfer, to solve the ESBGK equations. In this method, at each cell in the physical domain, distribution function values for all velocity ordinates are solved simultaneously. This coupled solution is used as a relaxation sweep in a geometric multigrid method in the spatial domain. Enhancements to COMET to account for the non-linearity of the ESBGK equations, as well as the coupled implementation of boundary conditions, are presented. The methodology works well with arbitrary convex polyhedral meshes, and is shown to give significantly faster solutions than the conventional sequential solution procedure. Acceleration factors of 5–9 are obtained for low to moderate Knudsen numbers on single processor platforms.

  9. Renormalizing SMD: The Renormalization Approach and Its Use in Long Time Simulations and Accelerated PMF Calculations of Macromolecules

    PubMed Central

    Dryga, Anatoly; Warshel, Arieh

    2010-01-01

    Simulations of long-time processes in condensed phases in general, and in biomolecules in particular, present a major challenge that cannot be overcome at present by brute-force molecular dynamics (MD) approaches. This work takes the renormalization method, introduced by us some time ago, and establishes its reliability and potential in extending the time scale of molecular simulations. The validation involves a truncated gramicidin system in the gas phase that is small enough to allow very long explicit simulation and sufficiently complex to present the physics of realistic ion channels. The renormalization approach is found to be reliable and arguably presents the first approach that allows one to exploit the otherwise problematic steered molecular dynamics (SMD) treatments in quantitative and meaningful studies. It is established that we can reproduce the long-time behavior of large systems by using Langevin dynamics (LD) simulations of a renormalized implicit model. This is done without spending the enormous time needed to obtain such trajectories in the explicit system. The present study also provides a promising advance in accelerated evaluation of free energy barriers. This is done by adjusting the effective potential in the implicit model to reproduce the same passage time as that obtained in the explicit model, under the influence of an external force. Here having a reasonable effective friction provides a way to extract the potential of mean force (PMF) without investing the time needed for regular PMF calculations. The renormalization approach, which is illustrated here in realistic calculations, is expected to provide a major help in studies of complex landscapes and in exploring long-time dynamics of biomolecules. PMID:20836533

  10. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators of use in cancer therapy

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2010-01-01

    The Monte Carlo simulation of clinical electron linear accelerators requires large computation times to achieve the level of uncertainty required for radiotherapy. In this context, variance reduction techniques play a fundamental role in the reduction of this computational time. Here we describe the use of the ant colony method to control the application of two variance reduction techniques: splitting and Russian roulette. The approach can be applied to any accelerator in a straightforward way and permits increasing the efficiency of the simulation by a factor larger than 50.
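
    The two techniques being controlled, splitting and Russian roulette, are commonly expressed together as a weight window; the sketch below shows only that generic bookkeeping with illustrative window bounds. The ant-colony control of where and how strongly to apply them is not modeled.

```python
import random

def weight_window(w, w_min=0.25, w_max=2.0):
    """Generic weight-window form of splitting and Russian roulette:
    particles above the window are split into equal-weight copies; particles
    below it survive roulette with probability w / w_survive or are killed.
    Returns the list of resulting particle weights."""
    if w > w_max:                                  # splitting
        n = int(w / w_max) + 1
        return [w / n] * n
    if w < w_min:                                  # Russian roulette
        w_survive = (w_min + w_max) / 2.0
        return [w_survive] if random.random() < w / w_survive else []
    return [w]

print(weight_window(5.0))   # -> three copies of weight ~1.67
print(weight_window(0.1))   # -> usually [], occasionally one heavier particle
```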

  11. Accelerating Ab Initio Path Integral Simulations via Imaginary Multiple-Timestepping.

    PubMed

    Cheng, Xiaolu; Herr, Jonathan D; Steele, Ryan P

    2016-04-12

    This work investigates the use of multiple-timestep schemes in imaginary time for computationally efficient ab initio equilibrium path integral simulations of quantum molecular motion. In the simplest formulation, only every nth path integral replica is computed at the target level of electronic structure theory, whereas the remaining low-level replicas still account for nuclear motion quantum effects with a more computationally economical theory. Motivated by recent developments for multiple-timestep techniques in real-time classical molecular dynamics, both 1-electron (atomic-orbital basis set) and 2-electron (electron correlation) truncations are shown to be effective. Structural distributions and thermodynamic averages are tested for representative analytic potentials and ab initio molecular examples. Target quantum chemistry methods include density functional theory and second-order Møller-Plesset perturbation theory, although any level of theory is formally amenable to this framework. For a standard two-level splitting, computational speedups of 1.6-4.0x are observed when using a 4-fold reduction in time slices; an 8-fold reduction is feasible in some cases. Multitiered options further reduce computational requirements and suggest that quantum mechanical motion could potentially be obtained at a cost not significantly different from the cost of classical simulations. PMID:26966920

  12. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    SciTech Connect

    Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
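
    The core DREAM ingredient is a differential-evolution proposal built from the current population of chains; the sketch below shows that jump rule with the standard 2.38/√(2d) scale, while omitting DREAM's randomized subspace sampling and self-adaptive crossover.

```python
import numpy as np

def de_proposal(chains, i, gamma=None, eps_scale=1e-6, rng=np.random):
    """Differential-evolution proposal for chain i, in the spirit of DE-MC /
    DREAM: jump along the difference of two other randomly chosen chains,
    x* = x_i + gamma * (x_a - x_b) + eps."""
    n_chains, dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2.0 * dim)          # standard DE-MC choice
    a, b = rng.choice([k for k in range(n_chains) if k != i],
                      size=2, replace=False)
    eps = eps_scale * rng.standard_normal(dim)
    return chains[i] + gamma * (chains[a] - chains[b]) + eps

chains = np.random.randn(8, 3)          # 8 chains in a 3-dimensional space
proposal = de_proposal(chains, i=0)
```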

  13. Accelerating Our Understanding of Supernova Explosion Mechanism via Simulations and Visualizations with GenASiS

    SciTech Connect

    Budiardja, R. D.; Cardall, Christian Y; Endeve, Eirik

    2015-01-01

    Core-collapse supernovae are among the most powerful explosions in the Universe, releasing about 10⁵³ erg of energy on timescales of a few tens of seconds. These explosion events are also responsible for the production and dissemination of most of the heavy elements, making life as we know it possible. Yet exactly how they work is still unresolved. One reason for this is the sheer complexity and cost of a self-consistent, multi-physics, and multi-dimensional core-collapse supernova simulation, which is impractical, and often impossible, even on the largest supercomputers we have available today. To advance our understanding we instead must often use simplified models, teasing out the most important ingredients for successful explosions, while helping us to interpret results from higher fidelity multi-physics models. In this paper we investigate the role of instabilities in the core-collapse supernova environment. We present here simulation and visualization results produced by our code GenASiS.

  14. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on the Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and maintaining particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code is verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computers having nVidia GPGPUs with compute capability 3.0. Operating system: Linux. Has the code been

  15. Paradigms for machine learning

    NASA Technical Reports Server (NTRS)

    Schlimmer, Jeffrey C.; Langley, Pat

    1991-01-01

    Five paradigms are described for machine learning: connectionist (neural network) methods, genetic algorithms and classifier systems, empirical methods for inducing rules and decision trees, analytic learning methods, and case-based approaches. Some dimensions are considered along which these paradigms vary in their approach to learning, and the basic methods used within each framework are reviewed, together with open research issues. It is argued that the similarities among the paradigms are more important than their differences, and that future work should attempt to bridge the existing boundaries. Finally, some recent developments in the field of machine learning are discussed, and their impact on both research and applications is examined.

  16. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The developments and applications of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor that may impact the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a

  17. A scalable messaging system for accelerating discovery from large scale scientific simulations

    SciTech Connect

    Jin, Tong; Zhang, Fan; Parashar, Manish; Klasky, Scott A; Podhorszki, Norbert; Abbasi, Hasan

    2012-01-01

    Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which has to be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges, and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., simple data values or a complex function or simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain is greater/less than a threshold value, or certain spatial/temporal data features or data patterns are detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the
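
    A toy illustration of the kind of subscription the framework exposes: evaluate a reduction over a region of the simulation field and trigger an action when it crosses a threshold. The class and parameter names below are invented for illustration only; the actual staging-based messaging system, its transports, and its in-situ actions are far more involved.

```python
import numpy as np

class RegionThresholdSubscription:
    """Toy data-event subscription: notify (and run an action) when a
    reduction over a region of the simulation domain exceeds a threshold."""
    def __init__(self, region_slice, reduction=np.max, threshold=0.9, action=None):
        self.region = region_slice
        self.reduce = reduction
        self.threshold = threshold
        self.action = action or (lambda v: print(f"event: reduced value {v:.3f}"))

    def check(self, field):
        value = self.reduce(field[self.region])
        if value > self.threshold:
            self.action(value)          # e.g. kick off a visualization step
            return True
        return False

sub = RegionThresholdSubscription(region_slice=(slice(10, 20), slice(10, 20)))
sub.check(np.random.rand(64, 64))
```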

  18. Quasi-spherical direct drive fusion simulations for the Z machine and future accelerators.

    SciTech Connect

    VanDevender, J. Pace; McDaniel, Dillon Heirman; Roderick, Norman Frederick; Nash, Thomas J.

    2007-11-01

    We explored the potential of Quasi-Spherical Direct Drive (QSDD) to reduce the cost and risk of a future fusion driver for Inertial Confinement Fusion (ICF) and to produce megajoule thermonuclear yield on the renovated Z Machine with a pulse-shortening Magnetically Insulated Current Amplifier (MICA). Analytic relationships for constant implosion velocity and constant pusher stability have been derived and show that the required current scales as the implosion time. Therefore, a MICA is necessary to drive QSDD capsules with hot-spot ignition on Z. We have optimized the LASNEX parameters for QSDD with realistic walls and mitigated many of the risks. Although the mix-degraded 1D yield is computed to be ≈30 MJ on Z, unmitigated wall expansion under the > 100 gigabar pressure just before burn prevents ignition in the 2D simulations. A squeezer system of adjacent implosions may mitigate the wall expansion and permit the plasma to burn.

  19. Particle acceleration due to shocks in the interplanetary field: High time resolution data and simulation results

    NASA Technical Reports Server (NTRS)

    Kessel, R. L.; Armstrong, T. P.; Nuber, R.; Bandle, J.

    1985-01-01

    Data were examined from two experiments aboard the Explorer 50 (IMP 8) spacecraft. The Johns Hopkins University/Applied Physics Laboratory Charged Particle Measurement Experiment (CPME) provides 10.12 second resolution ion and electron count rates as well as 5.5 minute or longer averages of the same, with data sampled in the ecliptic plane. The high time resolution of the data allows for an explicit, point-by-point merging of the magnetic field and particle data and thus a close examination of the pre- and post-shock conditions and particle fluxes associated with large-angle oblique shocks in the interplanetary field. A computer simulation has been developed wherein sample particle trajectories, taken from observed fluxes, are allowed to interact with a planar shock either forward or backward in time. One event, the 1974 Day 312 shock, is examined in detail.

  20. The fictionalist paradigm.

    PubMed

    Paley, John

    2011-01-01

    The fictionalist paradigm is introduced, and differentiated from other paradigms, using the Lincoln & Guba template. Following an initial overview, the axioms of fictionalism are delineated by reference to standard metaphysical categories: the nature of reality, the relationship between knower and known, the possibility of generalization, the possibility of causal linkages, and the role of values in inquiry. Although a paradigm's 'basic beliefs' are arbitrary and can be assumed for any reason, in this paper the fictionalist axioms are supported with philosophical considerations, and the key differences between fictionalism, positivism, and constructivism are briefly explained. Paradigm characteristics are then derived, focusing particularly on the methodological consequences. Towards the end of the paper, various objections and misunderstandings are discussed. PMID:21143578

  1. An object-oriented, coprocessor-accelerated model for ice sheet simulations

    NASA Astrophysics Data System (ADS)

    Seddik, H.; Greve, R.

    2013-12-01

    Recently, numerous models capable of modeling the thermodynamics of ice sheets have been developed within the ice sheet modeling community. Their capabilities have been characterized by a wide range of features with different numerical methods (finite difference or finite element), different implementations of the ice flow mechanics (shallow-ice, higher-order, full Stokes) and different treatments for the basal and coastal areas (basal hydrology, basal sliding, ice shelves). Shallow-ice models (SICOPOLIS, IcIES, PISM, etc.) have been widely used for modeling whole ice sheets (Greenland and Antarctica) due to the relatively low computational cost of the shallow-ice approximation, but higher-order (ISSM, AIF) and full Stokes (Elmer/Ice) models have recently been used to model the Greenland ice sheet. The advance in processor speed and the decrease in cost for accessing large amounts of memory and storage have undoubtedly been the driving force in the commoditization of models with higher capabilities, and the popularity of Elmer/Ice (http://elmerice.elmerfem.com) with an active user base is a notable representation of this trend. Elmer/Ice is a full Stokes model built on top of the multi-physics package Elmer (http://www.csc.fi/english/pages/elmer), which provides the full machinery for the complex finite element procedure and is fully parallel (mesh partitioning with OpenMPI communication). Elmer is mainly written in Fortran 90 and targets essentially traditional processors, as the code base was not initially written to run on modern coprocessors (although adding support for the recently introduced x86-based coprocessors is possible). Furthermore, a truly modular and object-oriented implementation is required for quick adaptation to fast-evolving capabilities in hardware (Fortran 2003 provides an object-oriented programming model, but it is not a clean fit and would require a tricky refactoring of the Elmer code). In this work, the object-oriented, coprocessor-accelerated finite element

  2. Estimation of photoneutron yield in linear accelerator with different collimation systems by Geant4 and MCNPX simulation codes

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Sang; Khazaei, Zeinab; Ko, Junho; Afarideh, Hossein; Ghergherehchi, Mitra

    2016-04-01

    At present, the bremsstrahlung photon beams produced by linear accelerators are the most commonly employed method of radiotherapy for tumor treatments. A photoneutron source based on three different energies (6, 10 and 15 MeV) of a linac electron beam was designed by means of Geant4 and Monte Carlo N-Particle eXtended (MCNPX) simulation codes. To obtain maximum neutron yield, two arrangements for the photoneutron convertor were studied: (a) without a collimator, and (b) placement of the convertor after the collimator. The maximum photon intensities in tungsten were 0.73, 1.24 and 2.07 photon/e at 6, 10 and 15 MeV, respectively. There was no considerable increase in the photon fluence spectra from 6 to 15 MeV at the optimum thickness between 0.8 mm and 2 mm of tungsten. The optimum dimensions of the collimator were determined to be a length of 140 mm with an aperture of 5 mm × 70 mm for iron in a slit shape. According to the neutron yield, the best thickness obtained for the studied materials was 30 mm. The number of neutrons generated in BeO achieved the maximum value at 6 MeV, unlike that in Be, where the highest number of neutrons was observed at 15 MeV. Statistical uncertainty in all simulations was less than 0.3% and 0.05% for MCNPX and the standard electromagnetic (EM) physics packages of Geant4, respectively. Differences among spectra in various regions are due to various cross-section and stopping power data and different simulations of the physics processes.

  3. Estimation of photoneutron yield in linear accelerator with different collimation systems by Geant4 and MCNPX simulation codes.

    PubMed

    Kim, Yoon Sang; Khazaei, Zeinab; Ko, Junho; Afarideh, Hossein; Ghergherehchi, Mitra

    2016-04-01

    At present, the bremsstrahlung photon beams produced by linear accelerators are the most commonly employed method of radiotherapy for tumor treatments. A photoneutron source based on three different energies (6, 10 and 15 MeV) of a linac electron beam was designed by means of Geant4 and Monte Carlo N-Particle eXtended (MCNPX) simulation codes. To obtain maximum neutron yield, two arrangements for the photoneutron convertor were studied: (a) without a collimator, and (b) placement of the convertor after the collimator. The maximum photon intensities in tungsten were 0.73, 1.24 and 2.07 photon/e at 6, 10 and 15 MeV, respectively. There was no considerable increase in the photon fluence spectra from 6 to 15 MeV at the optimum thickness between 0.8 mm and 2 mm of tungsten. The optimum dimensions of the collimator were determined to be a length of 140 mm with an aperture of 5 mm × 70 mm for iron in a slit shape. According to the neutron yield, the best thickness obtained for the studied materials was 30 mm. The number of neutrons generated in BeO achieved the maximum value at 6 MeV, unlike that in Be, where the highest number of neutrons was observed at 15 MeV. Statistical uncertainty in all simulations was less than 0.3% and 0.05% for MCNPX and the standard electromagnetic (EM) physics packages of Geant4, respectively. Differences among spectra in various regions are due to various cross-section and stopping power data and different simulations of the physics processes. PMID:26975304

  4. Vlasov simulation of laser-driven shock acceleration and ion turbulence

    NASA Astrophysics Data System (ADS)

    Grassi, A.; Fedeli, L.; Sgattoni, A.; Macchi, A.

    2016-03-01

    We present a Vlasov, i.e. a kinetic Eulerian simulation study of nonlinear collisionless ion-acoustic shocks and solitons excited by an intense laser interacting with an overdense plasma. The use of the Vlasov code avoids problems with low particle statistics and allows a validation of particle-in-cell results. A simple, original correction to the splitting method for the numerical integration of the Vlasov equation has been implemented in order to ensure the charge conservation in the relativistic regime. We show that the ion distribution is affected by the development of a turbulence driven by the relativistic ‘fast’ electron bunches generated at the laser-plasma interaction surface. This leads to the onset of ion reflection at the shock front in an initially cold plasma where only soliton solutions without ion reflection are expected to propagate. We give a simple analytical model to describe the onset of the turbulence as a nonlinear coupling of the ion density with the fast electron currents, taking the pulsed nature of the relativistic electron bunches into account.
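
    As a rough illustration of the splitting method mentioned above, the sketch below advances a non-relativistic 1D1V distribution f(x, v) with a semi-Lagrangian Strang split (half advection in x, kick in v, half advection in x) in assumed normalized units. This is only the generic textbook scheme; the paper's charge-conserving correction for the relativistic regime is not reproduced here.

```python
import numpy as np

def shift_periodic(values, grid, shift):
    """Semi-Lagrangian advection of a periodic 1D profile by `shift` (linear interp)."""
    L = grid.size * (grid[1] - grid[0])
    return np.interp((grid - shift) % L, grid, values, period=L)

def shift_open(values, grid, shift):
    """Same for the open velocity direction; f -> 0 outside the grid."""
    return np.interp(grid - shift, grid, values, left=0.0, right=0.0)

def vlasov_step(f, x, v, E, dt):
    """Strang splitting: half step in x, full kick in v, half step in x."""
    for j, vj in enumerate(v):
        f[:, j] = shift_periodic(f[:, j], x, vj * dt / 2)
    for i, Ei in enumerate(E):
        f[i, :] = shift_open(f[i, :], v, -Ei * dt)   # electron acceleration a = -E
    for j, vj in enumerate(v):
        f[:, j] = shift_periodic(f[:, j], x, vj * dt / 2)
    return f

# Example: a weakly perturbed Maxwellian on a periodic spatial grid.
x = np.linspace(0.0, 4.0 * np.pi, 64, endpoint=False)
v = np.linspace(-6.0, 6.0, 128)
f = np.exp(-v[None, :]**2 / 2) * (1.0 + 0.05 * np.cos(0.5 * x[:, None]))
f = vlasov_step(f, x, v, E=np.zeros_like(x), dt=0.1)
```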

  5. A 3d particle simulation code for heavy ion fusion accelerator studies

    SciTech Connect

    Friedman, A.; Bangerter, R.O.; Callahan, D.A.; Grote, D.P.; Langdon, A.B.; Haber, I.

    1990-06-08

    We describe WARP, a new particle-in-cell code being developed and optimized for ion beam studies in true geometry. We seek to model transport around bends, axial compression with strong focusing, multiple beamlet interaction, and other inherently 3d processes that affect emittance growth. Constraints imposed by memory and running time are severe. Thus, we employ only two 3d field arrays (ρ and φ), and difference φ directly on each particle to get E, rather than interpolating E from three meshes; use of a single 3d array is feasible. A new method for PIC simulation of bent beams follows the beam particles in a family of rotated laboratory frames, thus "straightening" the bends. We are also incorporating an envelope calculation, an (r, z) model, and a 1d (axial) model within WARP. The BASIS development and run-time system is used, providing a powerful interactive environment in which the user has access to all variables in the code database. 10 refs., 3 figs.
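
    To illustrate the memory-saving field gather described above (differencing φ directly at each particle rather than storing E on separate meshes), here is a hypothetical 1D sketch, not WARP code: the lowest-order gather simply takes the potential difference across the particle's cell.

```python
import numpy as np

def gather_E_from_phi(phi, dx, xp):
    """Lowest-order gather: the field seen by a particle in cell i is the
    difference of phi across that cell, so no E mesh is ever stored.
    Particles are assumed to lie strictly inside the grid."""
    i = np.floor(xp / dx).astype(int)
    return -(phi[i + 1] - phi[i]) / dx

# Example: phi = x^2/2 on a uniform grid gives E approximately equal to -x.
dx = 0.1
x_grid = np.arange(0.0, 2.0 + dx, dx)
phi = 0.5 * x_grid**2
print(gather_E_from_phi(phi, dx, np.array([0.35, 1.2])))   # roughly -x at each position
```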

  6. Interpretation of atomic motion in flexible molecules: Accelerating molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Omelyan, Igor; Kovalenko, Andriy

    2012-02-01

    We propose a new approach to split up the velocities of atoms of flexible molecules into translational, rotational, and vibrational components. As a result, the kinetic energy of the system can easily be expressed in terms of only three parts related to the above components. This is distinct from the standard Eckart method, where the cumbersome Coriolis contribution to the kinetic energy appears additionally. The absence of such a contribution within the proposed approach allows us to readily extend the microcanonical multiple-time-step dynamics of flexible molecules to the canonical-isokinetic Nosé-Hoover chain ensemble by explicitly integrating the translational, orientational, and vibrational motion. The previous extensions dealt exclusively with translational degrees of freedom of separate atoms, leading to a limitation on the size of the outer time step of 100 femtoseconds. We show on molecular dynamics simulations of the flexible TIP3P water model that the new canonical-isokinetic formulation makes it possible to significantly overcome this limitation. In particular, huge outer time steps, from a few hundred femtoseconds up to several picoseconds, can now be employed to study conformational properties without loss of accuracy.
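
    One way to carry out such a three-way split, sketched below for a single molecule using the instantaneous inertia tensor, is to subtract the centre-of-mass velocity and then the rigid-rotation velocity ω × d; whatever remains is labelled vibrational. The paper's own construction may differ in detail, but with this choice of ω the kinetic energy separates into exactly three parts, mirroring the property described above.

```python
import numpy as np

def split_velocities(m, r, v):
    """Split atomic velocities of one molecule into translational, rotational and
    vibrational parts using the instantaneous inertia tensor (illustrative only)."""
    M = m.sum()
    R = (m[:, None] * r).sum(axis=0) / M              # centre of mass
    V = (m[:, None] * v).sum(axis=0) / M              # translational velocity
    d = r - R
    u = v - V
    L = (m[:, None] * np.cross(d, u)).sum(axis=0)     # angular momentum about the COM
    I = np.zeros((3, 3))
    for mi, di in zip(m, d):
        I += mi * (di @ di * np.eye(3) - np.outer(di, di))
    omega = np.linalg.solve(I, L)                     # instantaneous angular velocity
    v_rot = np.cross(omega, d)                        # rigid-rotation part
    v_vib = u - v_rot                                 # remainder labelled "vibration"
    return V, v_rot, v_vib

# Example: a water-like 3-atom molecule with random velocities.
m = np.array([16.0, 1.0, 1.0])
r = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
v = np.random.randn(3, 3)
V, v_rot, v_vib = split_velocities(m, r, v)
K = 0.5 * (m[:, None] * v**2).sum()
K3 = (0.5 * m.sum() * V @ V
      + 0.5 * (m[:, None] * v_rot**2).sum()
      + 0.5 * (m[:, None] * v_vib**2).sum())
print(np.isclose(K, K3))   # True: the kinetic energy separates into three parts
```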

  7. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    SciTech Connect

    Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo simulations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
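
    The 1D sketch below shows the generic kernel-density tally idea that the abstract builds on, not the paper's mean-free-path-based kernel: every collision contributes to all nearby tally points through a kernel of bandwidth h, instead of incrementing a single histogram bin. The kernel choice, bandwidth and source distribution here are illustrative.

```python
import numpy as np

def kde_tally(tally_points, collision_sites, weights, h):
    """Score each collision at all tally points within one bandwidth (Epanechnikov kernel)."""
    scores = np.zeros_like(tally_points)
    for xc, w in zip(collision_sites, weights):
        u = (tally_points - xc) / h
        kernel = 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)
        scores += w * kernel / h
    return scores / len(collision_sites)          # illustrative per-history normalisation

pts = np.linspace(0.0, 10.0, 101)
sites = np.random.exponential(scale=3.0, size=10_000)   # fake collision sites
print(kde_tally(pts, sites, np.ones_like(sites), h=0.5)[:5])
```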

  8. Accelerating the numerical simulation of magnetic field lines in tokamaks using the GPU

    SciTech Connect

    Kalling, RC; Evans, T.E.; Orlov, D. M.; Schissel, D.; Maingi, Rajesh; Menard, J.; Sabbagh, S. A.

    2011-01-01

    TRIP3D is a field line simulation code that numerically integrates a set of nonlinear magnetic field line differential equations. The code is used to study properties of magnetic islands and stochastic or chaotic field line topologies that are important for designing non-axisymmetric magnetic perturbation coils for controlling plasma instabilities in future machines. The code is very computationally intensive, and for large runs can take on the order of days to complete on a traditional single CPU. This work describes how the code was converted from Fortran to C and then restructured to take advantage of GPU computing using NVIDIA's CUDA. The reduction in computing time has been dramatic: runs that previously took days now take hours, allowing a scale of problem to be examined that would previously not have been attempted. These gains have been accomplished without significant hardware expense. Performance, correctness, code flexibility, and implementation time have been analyzed to gauge the success and applicability of these methods when compared to the traditional multi-CPU approach.
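
    The numerical kernel described above is the repeated integration of the field line equation dr/ds = B/|B| for many starting points. The sketch below shows a generic RK4 version of that kernel with a made-up placeholder field (not the TRIP3D equilibrium and perturbation fields), vectorised over field lines in the way that maps naturally onto a GPU.

```python
import numpy as np

def unit_B(r):
    """Placeholder field: a dominant component plus a small periodic perturbation."""
    B = np.zeros_like(r)
    B[:, 0] = 1.0
    B[:, 1] = 0.1 * np.sin(2.0 * np.pi * r[:, 0])
    B[:, 2] = 0.05 * np.cos(2.0 * np.pi * r[:, 0])
    return B / np.linalg.norm(B, axis=1, keepdims=True)

def trace(r0, ds, nsteps):
    """Advance all field lines together with RK4 steps of dr/ds = B/|B|."""
    r = r0.copy()
    for _ in range(nsteps):
        k1 = unit_B(r)
        k2 = unit_B(r + 0.5 * ds * k1)
        k3 = unit_B(r + 0.5 * ds * k2)
        k4 = unit_B(r + ds * k3)
        r += ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return r

lines = np.random.rand(1000, 3)            # 1000 field-line starting points
print(trace(lines, ds=0.01, nsteps=500)[:2])
```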

  9. Accelerated circumferential strain quantification of the left ventricle using CIRCOME: simulation and factor analysis

    NASA Astrophysics Data System (ADS)

    Moghaddam, Abbas N.; Finn, J. Paul

    2008-03-01

    Circumferential strain of the left ventricle reflects myocardial contractility and is considered a key index of cardiac function. It is also an important parameter in the quantitative evaluation of heart failure. Circumferential compression encoding, CIRCOME, is a novel method in cardiac MRI to evaluate this strain non-invasively and quickly. This strain encoding technique avoids the explicit measurement of the displacement field and does not require calculation of strain through spatial differentiation. CIRCOME bypasses these two time-consuming and noise-sensitive steps by directly using the frequency domain (k-space) information from radially tagged myocardium, before and after deformation. It uses the ring-shaped crown region of the k-space, generated by the taglines, to reconstruct circumferentially compression-weighted images of the heart before and after deformation. CIRCOME then calculates the circumferential strain through relative changes in the compression level of corresponding regions before and after deformation. This technique can be implemented in 3D as well as 2D and may be employed to estimate the overall global or regional circumferential strain. The main parameters that affect the accuracy of this method are spatial resolution, signal-to-noise ratio, eccentricity of the center of the radial taglines, their fading, and their density. Also, a variety of possible image reconstruction and filtering options may influence the accuracy of the method. This study describes the pulse sequence, algorithm, influencing factors and limiting criteria for CIRCOME and provides the simulated results.

  10. In view of accelerating CFD simulations through coupling with vortex particle approximations

    NASA Astrophysics Data System (ADS)

    Papadakis, Giorgos; Voutsinas, Spyros G.

    2014-06-01

    In order to exploit the capabilities of Computational Fluid Dynamics in aerodynamic design, the cost should be reduced without compromising accuracy and consistency. In this direction, a hybrid methodology is formulated within the context of domain decomposition. The strategy is to choose in each sub-domain the best performing method. Close to solid boundaries a grid-based Eulerian flow solver is used, while in the far field the flow is described in Lagrangian coordinates using particle approximations. Aiming at consistently including compressible effects, particles carry mass, dilatation, vorticity and energy, and the complete set of conservation laws is solved in Lagrangian coordinates. At software level, the URANS solver MaPFlow is coupled to the vortex code GENUVP. In the present paper the two-dimensional formulation is given along with validation tests around airfoils in steady and inherently unsteady conditions. It is verified that: purely Eulerian and hybrid simulations are equivalent; the Eulerian domain in the hybrid solver can be effectively restricted to a layer 1.5 chord lengths wide; significant cost reduction reaching up to a 1:3 ratio is achieved.

  11. Wearable ECG recorder with acceleration sensors for monitoring daily stress: office work simulation study.

    PubMed

    Okada, Y; Yoto, T Y; Suzuki, T; Sakuragawa, S; Sugiura, T

    2013-01-01

    A small, light-weight wearable electrocardiograph (ECG) device with a tri-axis accelerometer (x-, y- and z-axis) was developed for prolonged monitoring of everyday stress. It consists of an amplifier, a microcomputer with an AD converter, a triaxial accelerometer, and a memory card. Four parameters can be sampled at 1 kHz for more than 24 h, and a maximum of 27 h with a default battery and a one-gigabyte (1 GB) memory card. Off-line data processing includes extraction of motion information along the three axes and assessment of autonomic nervous system (ANS) activity through bispectral analysis and the tone-entropy method (T-E method) applied to the HRV data. The usability of the system was tested through simulated office work and three-day monitoring by replacing the battery and the memory card every 24 h. Both short-term and circadian rhythms of ANS activity were clearly observed. In addition, sympathetic nervous activities gradually increased from the second to the third day. The experimental data presented verify the functionality of the proposed system. PMID:24110788

  12. Particle Acceleration At Small-Scale Flux Ropes In The Heliosphere

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Hunana, P.; Mostafavi, P.; le Roux, J. A.; Li, G.; Webb, G. M.; Khabarova, O.; Cummings, A. C.; Stone, E. C.; Decker, R. B.

    2015-12-01

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands or flux ropes. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We discuss the basic physics of particle acceleration by single magnetic islands and describe how to incorporate these ideas in a distributed "sea of magnetic islands". We describe briefly some observations, selected simulations, and then introduce a transport approach for describing particle acceleration at small-scale flux ropes. We discuss particle acceleration in the supersonic solar wind and extend these ideas to particle acceleration at shock waves. These models are appropriate to the acceleration of both electrons and ions. We describe model predictions and supporting observations.

  13. Paradigm Change in Political Science.

    ERIC Educational Resources Information Center

    Rodman, John

    1980-01-01

    Traces the shift of paradigms in the political science profession from the 1960s to 1980, examines the classical paradigm, compares it with modern paradigms, and reviews contemporary efforts to articulate a new paradigm which takes the ecological crisis into account. (Author/DB)

  14. Scattering parameters of the 3.9 GHz accelerating module in a free-electron laser linac: A rigorous comparison between simulations and measurements

    NASA Astrophysics Data System (ADS)

    Flisgen, Thomas; Glock, Hans-Walter; Zhang, Pei; Shinton, Ian R. R.; Baboi, Nicoleta; Jones, Roger M.; van Rienen, Ursula

    2014-02-01

    This article presents a comparison between measured and simulated scattering parameters in a wide frequency interval for the third harmonic accelerating module ACC39 in the linear accelerator FLASH, located at DESY in Hamburg/Germany. ACC39 is a cryomodule housing four superconducting 3.9 GHz accelerating cavities. Due to the special shape of the cavities (in particular their end cells and the beam pipes) in ACC39, the electromagnetic field in the module is, in many frequency ranges, coupled from one cavity to the next. Therefore, the scattering parameters are determined by the entire string and not solely by the individual cavities. This makes the determination of the scattering properties demanding. As far as the authors can determine, this paper shows for the first time a direct comparison between state-of-the-art simulations and measurements of rf properties of long, complex, and asymmetric structures over a wide frequency band. Taking into account the complexity of the system and various geometrical unknowns, the agreement between experimental measurements and simulations is remarkably good for several distinct measurements, although a variety of effects (e.g. cavity deviations from the ideal shape or interactions with parts of the structure that were not modeled) is not considered in the computer simulation. After a short introduction, the paper provides detailed descriptions of simulations and experimental measurements performed at the module. In this context, the estimation of the cable properties is discussed as well. As a central part of the article, the comparison between measured and simulated transmission spectra and quality factors is presented. This study represents one of the first detailed comparisons between simulations and measurements for a coupled accelerator cavity system.

  15. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. The future development of PHITS includes better parameterization in the JQMD model used for the nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As a part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions has passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle up to 4 degrees. PMID:15934196

  16. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS

    NASA Astrophysics Data System (ADS)

    Sato, T.; Sihver, L.; Iwase, H.; Nakashima, H.; Niita, K.

    In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. The future development of PHITS includes better parameterization in the JQMD model used for the nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As a part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions has passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle up to 4°.

  17. GPU accelerated particle visualization with Splotch

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.

    2014-07-01

    Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are production of high quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch for exploiting emerging HPC architectures nowadays increasingly populated with GPUs. A performance model is introduced to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performances. Our implementation was accomplished by using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work developments including possibilities for further optimizations and exploitation of hybrid systems and emerging accelerators.

  18. Beam dynamics simulations of the transverse-to-longitudinal emittance exchange proof-of-principle experiment at the Argonne Wakefield Accelerator

    SciTech Connect

    Rihaoui, M.; Gai, W.; Kim, K.J.; Piot, Philippe; Power, John Gorham; Sun, Y.E.; /Fermilab

    2009-01-01

    Transverse-to-longitudinal emittance exchange has promising applications in various advanced acceleration and light source concepts. A proof-of-principle experiment to demonstrate this phase space manipulation method is currently being planned at the Argonne Wakefield Accelerator. The experiment focuses on exchanging a low longitudinal emittance with a high transverse horizontal emittance and also incorporates room for possible parametric studies, e.g., using an incoming flat beam with tunable horizontal emittance. In this paper, we present realistic start-to-end beam dynamics simulations of the scheme and explore the limitations of this phase space exchange.

  19. Beam dynamics simulations of the transverse-to-longitudinal emittance exchange proof-of-principle experiment at the Argonne Wakefield Accelerator.

    SciTech Connect

    Gao, F.; Gai, W.; Power, J. G.; Kim, K. J.; Sun, Y. E.; Piot, P.; Rihaoui, M.; High Energy Physics; Northern Illinois Univ.; FNAL

    2009-01-01

    Transverse-to-longitudinal emittance exchange has promising applications in various advanced acceleration and light source concepts. A proof-of-principle experiment to demonstrate this phase space manipulation method is currently being planned at the Argonne Wakefield Accelerator. The experiment focuses on exchanging a low longitudinal emittance with a high transverse horizontal emittance and also incorporates room for possible parametric studies, e.g., using an incoming flat beam with tunable horizontal emittance. In this paper, we present realistic start-to-end beam dynamics simulations of the scheme and explore the limitations of this phase space exchange.

  20. Beam dynamics simulations of the transverse-to-longitudinal emittance exchange proof-of-principle experiment at the Argonne Wakefield Accelerator

    SciTech Connect

    Rihaoui, M.; Gai, W.; Kim, K.-J.; Power, J. G.; Piot, P.; Sun, Y.-E.

    2009-01-22

    Transverse-to-longitudinal emittance exchange has promising applications in various advanced acceleration and light source concepts. A proof-of-principle experiment to demonstrate this phase space manipulation method is currently being planned at the Argonne Wakefield Accelerator. The experiment focuses on exchanging a low longitudinal emittance with a high transverse horizontal emittance and also incorporates room for possible parametric studies, e.g., using an incoming flat beam with tunable horizontal emittance. In this paper, we present realistic start-to-end beam dynamics simulations of the scheme and explore the limitations of this phase space exchange.

  1. A GPU accelerated, discrete time random walk model for simulating reactive transport in porous media using colocation probability function based reaction methods

    NASA Astrophysics Data System (ADS)

    Barnard, J. M.; Augarde, C. E.

    2012-12-01

    The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle-tracking-based models than in continuum-based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive, as the reaction simulations require tens of thousands of nearest neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion, to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media which includes a variation of colocation probability function based methods of reaction simulation from those presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization using GPUs. The architecture of GPUs is single instruction - multiple data (SIMD). This means that only one operation can be performed at any one time, but it can be performed on multiple data simultaneously. This allows for significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
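
    A much-simplified 1D sketch of a discrete-time random walk step with a colocation-probability reaction test is given below. The constants, the nearest-neighbour pairing and the Gaussian colocation density are illustrative stand-ins for the scheme referenced above, and no GPU code is shown; the point is only that each step combines advection, a random-walk displacement and pairwise reaction tests, the last of which dominates the cost at scale.

```python
import numpy as np

rng = np.random.default_rng(0)
D, v, dt, kf = 1e-3, 0.05, 1.0, 0.05            # illustrative transport and rate constants

xA = rng.uniform(0.0, 1.0, 2000)                # positions of A particles
xB = rng.uniform(0.0, 1.0, 2000)                # positions of B particles

def step(x):
    """One discrete-time random walk step: advection plus Brownian displacement."""
    return x + v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)

for _ in range(10):
    xA, xB = step(xA), step(xB)
    # Pair each A with its nearest B and react with a probability given by a
    # Gaussian colocation density of the pair separation s.
    order = np.argsort(xB)
    xBs = xB[order]
    idx = np.clip(np.searchsorted(xBs, xA), 1, xBs.size - 1)
    nearest = np.where(np.abs(xBs[idx] - xA) < np.abs(xBs[idx - 1] - xA),
                       order[idx], order[idx - 1])
    s = np.abs(xB[nearest] - xA)
    p_react = kf * dt * np.exp(-s**2 / (8.0 * D * dt)) / np.sqrt(8.0 * np.pi * D * dt)
    hits = rng.random(xA.size) < p_react
    xA = xA[~hits]                                   # remove reacted A particles
    xB = np.delete(xB, np.unique(nearest[hits]))     # and their B partners
    # (the rare case of two A's sharing the same nearest B is ignored in this sketch)
print(xA.size, xB.size)                              # particle counts after 10 steps
```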

  2. Alternative Evaluation Research Paradigm.

    ERIC Educational Resources Information Center

    Patton, Michael Quinn

    This monograph is one of a continuing series initiated to provide materials for teachers, parents, school administrators, and governmental decision-makers that might encourage reexamination of a range of evaluation issues and perspectives about schools and schooling. This monograph is a description and analysis of two contrasting paradigms: one…

  3. Deconstructing Research: Paradigms Lost

    ERIC Educational Resources Information Center

    Trifonas, Peter Pericles

    2009-01-01

    In recent decades, proponents of naturalistic and/or critical modes of inquiry advocating the use of ethnographic techniques for the narrative-based study of phenomena within pedagogical contexts have challenged the central methodological paradigm of educational research: that is, the tendency among its practitioners to adhere to quantitative…

  4. Paradigms of School Change

    ERIC Educational Resources Information Center

    Wrigley, Terry

    2011-01-01

    This short paper points to some paradigm issues in the field of school development (leadership, effectiveness, improvement) and their relationship to social justice. It contextualises the dominant School Effectiveness and School Improvement models within neo-liberal marketisation, paying attention to their transformation through a "marriage of…

  5. The "New Environmental Paradigm"

    ERIC Educational Resources Information Center

    Dunlap, Riley E.; Van Liere, Kent D.

    2008-01-01

    The "New Environmental Paradigm" or NEP appears to have gained considerable popularity in academic and intellectual circles, as well as among many college students; however, very little is known concerning the degree to which the general public has come to accept the ideas embodied in it. Thus, although there have been dozens of studies of…

  6. Telemedicine: a new paradigm.

    PubMed

    Denton, I

    1993-11-01

    Technological innovations sometimes compel paradigm shifts. Two decades ago the CT scanner was one such phenomenon: It quickly and absolutely transformed medical practice. A marriage of medicine and telecommunications could engender a similar transformation. The timing surely is favorable, as pressing requirements of healthcare reform coincide with the flowering of telecommunications technologies. PMID:10130474

  7. TURBULENT SHEAR ACCELERATION

    SciTech Connect

    Ohira, Yutaka

    2013-04-10

    We consider particle acceleration by large-scale incompressible turbulence with a length scale larger than the particle mean free path. We derive an ensemble-averaged transport equation of energetic charged particles from an extended transport equation that contains the shear acceleration. The ensemble-averaged transport equation describes particle acceleration by incompressible turbulence (turbulent shear acceleration). We find that for Kolmogorov turbulence, the turbulent shear acceleration becomes important on small scales. Moreover, using Monte Carlo simulations, we confirm that the ensemble-averaged transport equation describes the turbulent shear acceleration.

  8. Three-dimensional simulations of the non-thermal broadband emission from young supernova remnants including efficient particle acceleration

    SciTech Connect

    Ferrand, Gilles; Safi-Harb, Samar; Decourchelle, Anne E-mail: samar@physics.umanitoba.ca

    2014-07-01

    Supernova remnants are believed to be major contributors to Galactic cosmic rays. In this paper, we explore how the non-thermal emission from young remnants can be used to probe the production of energetic particles at the shock (both protons and electrons). Our model couples hydrodynamic simulations of a supernova remnant with a kinetic treatment of particle acceleration. We include two important back-reaction loops upstream of the shock: energetic particles can (1) modify the flow structure and (2) amplify the magnetic field. As the latter process is not fully understood, we use different limit cases that encompass a wide range of possibilities. We follow the history of the shock dynamics and of the particle transport downstream of the shock, which allows us to compute the non-thermal emission from the remnant at any given age. We do this in three dimensions, in order to generate projected maps that can be compared with observations. We observe that completely different recipes for the magnetic field can lead to similar modifications of the shock structure, although to very different configurations of the field and particles. We show how this affects the emission patterns in different energy bands, from radio to X-rays and γ-rays. High magnetic fields (>100 μG) directly impact the synchrotron emission from electrons, by restricting their emission to thin rims, and indirectly impact the inverse Compton emission from electrons and also the pion decay emission from protons, mostly by shifting their cut-off energies to respectively lower and higher energies.

  9. Simulation of the hot rolling and accelerated cooling of a C-Mn ferrite-bainite strip steel

    NASA Astrophysics Data System (ADS)

    Debray, B.; Teracher, P.; Jonas, J. J.

    1995-01-01

    By means of torsion testing, the microstructures and mechanical properties produced in a 0.14 Pct C-1.18 Pct Mn steel were investigated over a wide range of hot-rolling conditions, cooling rates, and simulated coiling temperatures. The austenite grain size present before accelerated cooling was varied from 10 to 150 μm by applying strains of 0 to 0.8 at temperatures of 850 °C to 1050 °C. Two cooling rates, 55 °C/s and 90 °C/s, were used. Cooling was interrupted at temperatures ranging from 550 °C to 300 °C. Optical microscopy and transmission electron microscopy (TEM) were employed to investigate the microstructures. The mechanical properties were studied by means of tensile testing. When a fine austenite grain size was present before cooling and a high cooling rate (90 °C/s) was used, the microstructure was composed of ferrite plus bainite and a mixture of ferrite and cementite, which may have formed by an interphase mechanism. The use of a lower cooling rate (55 °C/s) led to the presence of ferrite and fine pearlite. In both cases, the cooling interruption temperature and the amount of prior strain had little influence on the mechanical properties. Reheating at 1050 °C, which led to the presence of very coarse austenite, resulted in a stronger influence of the interruption temperature. A method developed at Institut de Recherche Sidérurgique (IRSID, St. Germain-en-Laye, France) for deducing the Continuous-Cooling-Transformation (CCT) diagrams from the cooling data was adapted to the present apparatus and used successfully to interpret the observed influence of the process parameters.

  10. The Nature of Paradigms and Paradigm Shifts in Music Education

    ERIC Educational Resources Information Center

    Panaiotidi, Elvira

    2005-01-01

    In this paper, the author attempts to extend the paradigm approach into the philosophy of music education and to build upon this basis a model for structuring music education discourse. The author begins with an examination of Peter Abbs' account of paradigms and paradigm shifts in arts education. Then she turns to Kuhn's conception and to his…

  11. Molecular dynamics simulation of organometallic reaction dynamics, and, Enhancing achievement in chemistry for African American students through innovations in pedagogy aligned with supporting assessment and curriculum and integrated under an alternative research paradigm

    NASA Astrophysics Data System (ADS)

    Mebane, Sheryl Dee

    Part I. Molecular dynamics simulation of organometallic reaction dynamics. To study the interplay of solute and solvent dynamics, large-scale molecular dynamics simulations were employed. Lennard-Jones and electrostatic models of potential energies from solvent-only studies were combined with solute potentials generated from ab-initio calculations. Radial distribution functions and other measures revealed the polar solvent's response to solute dynamics following CO dissociation. In future studies, the time-scale for solvent coordination will be confirmed with ultrafast spectroscopy data. Part II. Enhancing achievement in chemistry for African American students through innovations in pedagogy aligned with supporting assessment and curriculum and integrated under an alternative research paradigm. Much progress has been made in the area of research in education that focuses on teaching and learning in science. Much effort has also centered on documenting and exploring the disparity in academic achievement between underrepresented minority students and students comprising a majority in academic circles. However, few research projects have probed educational inequities in the context of mainstream science education. In order to enrich this research area and to better reach underserved learning communities, the educational experience of African American students in an ethnically and academically diverse high school science class has been examined throughout one, largely successful, academic year. The bulk of data gathered during the study was obtained through several qualitative research methods and was interpreted using research literature that offered fresh theoretical perspectives on equity that may better support effective action.

  12. Self-consistent Monte Carlo simulations of proton acceleration in coronal shocks: Effect of anisotropic pitch-angle scattering of particles

    NASA Astrophysics Data System (ADS)

    Afanasiev, A.; Battarbee, M.; Vainio, R.

    2015-12-01

    Context. Solar energetic particles observed in association with coronal mass ejections (CMEs) are produced by the CME-driven shock waves. The acceleration of particles is considered to be due to diffusive shock acceleration (DSA). Aims: We aim at a better understanding of DSA in the case of quasi-parallel shocks, in which self-generated turbulence in the shock vicinity plays a key role. Methods: We have developed and applied a new Monte Carlo simulation code for acceleration of protons in parallel coronal shocks. The code performs a self-consistent calculation of resonant interactions of particles with Alfvén waves based on the quasi-linear theory. In contrast to the existing Monte Carlo codes of DSA, the new code features the full quasi-linear resonance condition of particle pitch-angle scattering. This allows us to take anisotropy of particle pitch-angle scattering into account, while the older codes implement an approximate resonance condition leading to isotropic scattering. We performed simulations with the new code and with an old code, applying the same initial and boundary conditions, and have compared the results provided by both codes with each other, and with the predictions of the steady-state theory. Results: We have found that anisotropic pitch-angle scattering leads to less efficient acceleration of particles than isotropic. However, extrapolations to particle injection rates higher than those we were able to use suggest the capability of DSA to produce relativistic particles. The particle and wave distributions in the foreshock as well as their time evolution, provided by our new simulation code, are significantly different from the previous results and from the steady-state theory. Specifically, the mean free path in the simulations with the new code is increasing with energy, in contrast to the theoretical result.
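
    One common way to realise pitch-angle scattering in Monte Carlo codes of this kind is an Itô update dμ = (∂D/∂μ) dt + sqrt(2 D dt) ξ. The sketch below contrasts an isotropic diffusion coefficient with an anisotropic, |μ|-dependent one that suppresses scattering through μ = 0; the coefficients and the regularisation are illustrative and are not the paper's quasi-linear implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def D_iso(mu, nu=1.0):
    """Isotropic pitch-angle diffusion coefficient."""
    return 0.5 * nu * (1.0 - mu**2)

def D_qlt(mu, nu=1.0, q=5.0 / 3.0, eps=1e-3):
    """Anisotropic (|mu|-dependent) coefficient: scattering is suppressed near mu = 0."""
    return 0.5 * nu * (1.0 - mu**2) * (np.abs(mu) + eps) ** (q - 1.0)

def scatter(mu, D, dt):
    """One Ito step of d(mu) = (dD/dmu) dt + sqrt(2 D dt) xi."""
    h = 1e-4
    drift = (D(mu + h) - D(mu - h)) / (2.0 * h)        # numerical dD/dmu
    mu = mu + drift * dt + np.sqrt(2.0 * D(mu) * dt) * rng.standard_normal(mu.size)
    return np.clip(mu, -1.0, 1.0)                      # crude boundary handling for this sketch

mu = np.full(20_000, 0.9)                              # beam-like initial pitch angles
for _ in range(1000):
    mu = scatter(mu, D_qlt, dt=1e-3)
print(mu.mean(), mu.std())   # repeat with D_iso: crossings of mu = 0 are then easier
```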

  13. Finite Element Method Simulation of a New One-Chip-Style Quartz Crystal Motion Sensor with Two Functions of Gyro and Acceleration Detection

    NASA Astrophysics Data System (ADS)

    Koitabashi, Tatsuo; Kudo, Seiichi; Okada, Shigeya; Tomikawa, Yoshiro

    2001-09-01

    In this study, a new one-chip-style quartz crystal motion sensor which detects one-axis angular velocity and one-axis acceleration is proposed. Some characteristics of the sensor are simulated by the finite element method, along with some simulations of vibrational characteristics. This sensor is intended to be used as a small wristwatch-type instrumentation unit to monitor some motions of the human body. The dimensions of the prototype sensor are 16 mm in length, 6 mm in width and 0.3 mm in thickness. The sensor consists of two parts with different functions; one part is a flatly supported vibratory gyrosensor using a quartz crystal trident-type tuning fork resonator and the other is a frequency-changeable type acceleration sensor. The results of simulations show that the gyrosensor part has good linearity of sensitivity, although it is also sensitive to an angular velocity which fundamentally could not be detected. It also has good linearity of sensitivity for detection of acceleration.

  14. Montecarlo simulation code in optimisation of the IntraOperative Radiation Therapy treatment with mobile dedicated accelerator

    NASA Astrophysics Data System (ADS)

    Catalano, M.; Agosteo, S.; Moretti, R.; Andreoli, S.

    2007-06-01

    The principle of optimisation of the EURATOM 97/43 directive foresees that for all medical exposure of individuals for radiotherapeutic purposes, exposures of target volumes shall be individually planned, taking into account that doses of non-target volumes and tissues shall be as low as reasonably achievable and consistent with the intended radiotherapeutic purpose of the exposure. Treatment optimisation has to be carried out especially in non-conventional radiotherapeutic procedures, such as Intra Operative Radiation Therapy (IORT) with a mobile dedicated LINear ACcelerator (LINAC), which does not make use of a Treatment Planning System. IORT is carried out with electron beams and refers to the application of radiation during a surgical intervention, after the removal of a neoplastic mass, and it can also be used as a one-time/stand-alone treatment in initial cancers of small volume. IORT foresees a single session and a single beam only; therefore it is necessary to use protection systems (disks) temporarily positioned between the target volume and the underlying tissues, along the beam axis. A single high-Z shielding disk is used to stop the electrons of the beam at a certain depth and protect the tissues located below. Electron back scatter produces an enhancement in the dose above the disk, and this can be reduced if a second low-Z disk is placed above the first. Therefore two protection disks are used in clinical application. On the other hand, the dose enhancement at the interface of the high-Z disk and the target, due to back-scattering radiation, can usefully be exploited to improve the uniformity in the treatment of thicker target volumes. Furthermore, the dose above disks of different Z material has to be evaluated in order to study the optimal combination of shielding disks that allows both protection of the underlying tissues and the most uniform dose distribution in target volumes of different thicknesses. The dose enhancement can be evaluated using the electron

  15. Computer simulation of rocket/missile safing and arming mechanism (containing pin pallet runaway escapement, three-pass involute gear train and acceleration driven rotor)

    NASA Astrophysics Data System (ADS)

    Gorman, P. T.; Tepper, F. R.

    1986-03-01

    A complete simulation of missile and rocket safing and arming (S&A) mechanisms containing an acceleration-driven rotor, a three-pass involute gear train, and a pin pallet runaway escapement was developed. In addition, a modification to this simulation was formulated for the special case of the PATRIOT M143 S&A mechanism which has a pair of driving gears in addition to the three-pass gear train. The three motion regimes involved in escapement operation - coupled motion, free motion, and impact - are considered in the computer simulation. The simulation determines both the arming time of the device and the non-impact contact forces of all interacting components. The program permits parametric studies to be made, and is capable of analyzing pallets with arbitrarily located centers of mass. A sample simulation of the PATRIOT M143 S&A in an 11.9 g constant acceleration arming test was run. The results were in good agreement with laboratory test data.

  16. Parental reflective functioning is associated with tolerance of infant distress but not general distress: Evidence for a specific relationship using a simulated baby paradigm

    PubMed Central

    Rutherford, Helena J.V.; Goldberg, Benjamin; Luyten, Patrick; Bridgett, David J.; Mayes, Linda C.

    2013-01-01

    Parental reflective functioning represents the capacity of a parent to think about their own and their child’s mental states and how these mental states may influence behavior. Here we examined whether this capacity as measured by the Parental Reflective Functioning Questionnaire relates to tolerance of infant distress by asking mothers (N=21) to soothe a life-like baby simulator (BSIM) that was inconsolable, crying for a fixed time period unless the mother chose to stop the interaction. Increasing maternal interest and curiosity in their child’s mental states, a key feature of parental reflective functioning, was associated with longer persistence times with the BSIM. Importantly, on a non-parent distress tolerance task, parental reflective functioning was not related to persistence times. These findings suggest that parental reflective functioning may be related to tolerance of infant distress, but not distress tolerance more generally, and thus may reflect specificity to parenting-specific persistence behavior. PMID:23906942

  17. Effects of Turbulent Magnetic Fields on the Transport and Acceleration of Energetic Charged Particles: Numerical Simulations with Application to Heliospheric Physics

    NASA Astrophysics Data System (ADS)

    Guo, Fan

    2012-11-01

    Turbulent magnetic fields are ubiquitous in space physics and astrophysics. The influence of magnetic turbulence on the motions of charged particles contains the essential physics of the transport and acceleration of energetic charged particles in the heliosphere, which is to be explored in this thesis. After a brief introduction on the energetic charged particles and magnetic fields in the heliosphere, the rest of this dissertation focuses on three specific topics: 1. the transport of energetic charged particles in the inner heliosphere, 2. the acceleration of ions at collisionless shocks, and 3. the acceleration of electrons at collisionless shocks. We utilize various numerical techniques to study these topics. In Chapter 2 we study the propagation of charged particles in turbulent magnetic fields similar to the propagation of solar energetic particles in the inner heliosphere. The trajectories of energetic charged particles in the turbulent magnetic field are numerically integrated. The turbulence model includes a Kolmogorov-like magnetic field power spectrum containing a broad range of scales from those that lead to large-scale field-line random walk to small scales leading to resonant pitch-angle scattering of energetic particles. We show that small-scale variations in particle intensities (the so-called "dropouts") and velocity dispersions observed by spacecraft can be reproduced using this method. Our study gives a new constraint on the error of "onset analysis", which is a technique commonly used to infer information about the initial release of energetic particles. We also find that the dropouts are rarely produced in the simulations using the so-called "two-component" magnetic turbulence model (Matthaeus et al., 1990). The result questions the validity of this model in studying particle transport. In the first part of Chapter 3 we study the acceleration of ions in the existence of turbulent magnetic fields. We use 3-D self-consistent hybrid simulations

  18. Three dimensional particle-in-cell simulation of particle acceleration by circularly polarised inertial Alfven waves in a transversely inhomogeneous plasma

    SciTech Connect

    Tsiklauri, D.

    2012-08-15

    The process of particle acceleration by left-hand, circularly polarised inertial Alfven waves (IAW) in a transversely inhomogeneous plasma is studied using 3D particle-in-cell simulation. A cylindrical tube with a transverse (to the background magnetic field) inhomogeneity scale of the order of the ion inertial length is considered, on which IAWs with frequency 0.3ω_ci are launched and allowed to develop over three wavelengths. As a result, time-varying parallel electric fields are generated in the density gradient regions, which accelerate electrons in the direction parallel to the magnetic field. The driven perpendicular electric field of the IAWs also heats ions in the transverse direction. Such a numerical setup is relevant for solar flaring loops and the Earth's auroral zone. This first 3D, fully kinetic simulation demonstrates an electron acceleration efficiency in the density inhomogeneity regions, along the magnetic field, of the order of 45%, and ion heating, in the direction transverse to the magnetic field, of 75%. The latter is a factor of two higher than in the previous 2.5D analogous study and is in accordance with solar flare particle acceleration observations. We find that the generated parallel electric field is localised in the density inhomogeneity region and rotates in the same direction and with the same angular frequency as the initially launched IAW. Our numerical simulations also seem to suggest that the 'knee' often found in solar flare electron spectra can alternatively be interpreted as the Landau damping (Cerenkov resonance effect) of IAWs due to wave-particle interactions.

  19. Simulation of Energetic Particle Transport and Acceleration at Shock Waves in a Focused Transport Model: Implications for Mixed Solar Particle Events

    NASA Astrophysics Data System (ADS)

    Kartavykh, Y. Y.; Dröge, W.; Gedalin, M.

    2016-03-01

    We use numerical solutions of the focused transport equation obtained by an implicit stochastic differential equation scheme to study the evolution of the pitch-angle dependent distribution function of protons in the vicinity of shock waves. For a planar stationary parallel shock, the effects of anisotropic distribution functions, pitch-angle dependent spatial diffusion, and first-order Fermi acceleration at the shock are examined, including the timescales on which the energy spectrum approaches the predictions of diffusive shock acceleration theory. We then consider the case that a flare-accelerated population of ions is released close to the Sun simultaneously with a traveling interplanetary shock for which we assume a simplified geometry. We investigate the consequences of adiabatic focusing in the diverging magnetic field on the particle transport at the shock, and of the competing effects of acceleration at the shock and adiabatic energy losses in the expanding solar wind. We analyze the resulting intensities, anisotropies, and energy spectra as a function of time and find that our simulations can naturally reproduce the morphologies of so-called mixed particle events in which sometimes the prompt and sometimes the shock component is more prominent, by assuming parameter values which are typically observed for scattering mean free paths of ions in the inner heliosphere and energy spectra of the flare particles which are injected simultaneously with the release of the shock.
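
    As a reference point for the diffusive shock acceleration prediction mentioned above, the toy Monte Carlo below reproduces the classic test-particle estimate for a parallel shock: a mean fractional momentum gain of 4(u1 - u2)/(3c) per shock-crossing cycle and an escape probability of 4 u2/c per cycle for relativistic particles. It is not the paper's focused-transport scheme; it only illustrates the power law against which the full simulations are compared.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 1.0
u1, r = 0.01 * c, 4.0                  # upstream flow speed, compression ratio
u2 = u1 / r                            # downstream flow speed
gain = 4.0 * (u1 - u2) / (3.0 * c)     # mean fractional momentum gain per cycle
p_esc = 4.0 * u2 / c                   # probability of escaping downstream per cycle

p = np.ones(100_000)                   # momenta in units of the injection momentum
alive = np.ones(p.size, dtype=bool)
while alive.any():
    p[alive] *= 1.0 + gain
    alive &= rng.random(p.size) > p_esc

# DSA predicts f(p) ~ p^(-q) with q = 3r/(r-1), i.e. the cumulative N(>p) ~ p^(3-q).
ps = np.sort(p)
slope = np.polyfit(np.log(ps), np.log(np.arange(ps.size, 0, -1)), 1)[0]
print("measured q =", 3.0 - slope, " expected q =", 3.0 * r / (r - 1.0))
```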

  20. Exploring laser-wakefield-accelerator regimes for near-term lasers using particle-in-cell simulation in Lorentz-boosted frames

    NASA Astrophysics Data System (ADS)

    Martins, S. F.; Fonseca, R. A.; Lu, W.; Mori, W. B.; Silva, L. O.

    2010-04-01

    Plasma-based acceleration offers compact accelerators with potential applications for high-energy physics and photon sources. The past five years have seen an explosion of experimental results with monoenergetic electron beams up to 1 GeV on a centimetre scale, using plasma waves driven by intense lasers. The next decade will see tremendous increases in laser power and energy, permitting beam energies beyond 10 GeV. Leveraging the Lorentz transformation to bring the laser and plasma spatial scales together, we have reduced the computational time for modelling laser-plasma accelerators by several orders of magnitude, including all the relevant physics. This scheme enables the first one-to-one particle-in-cell simulations of the next generation of accelerators at the energy frontier. Our results demonstrate that, for a given laser energy, choices in laser and plasma parameters strongly affect the output electron beam energy, charge and quality, and that all of these parameters can be optimized.
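
    For reference, the standard boosted-frame estimate of why the Lorentz transformation brings the disparate scales together (assumed here; the numbers are not quoted from the paper): in a frame moving toward the plasma with velocity β_b c, the laser wavelength is stretched while the plasma column is contracted,

```latex
\lambda_0' = (1+\beta_b)\,\gamma_b\,\lambda_0 , \qquad
L_p' = \frac{L_p}{\gamma_b} , \qquad
\frac{N'_{\mathrm{steps}}}{N_{\mathrm{steps}}} \approx \frac{1}{(1+\beta_b)^2\,\gamma_b^2} \approx \frac{1}{4\,\gamma_b^2} ,
```

    so the number of time steps needed to follow the laser through the plasma drops by roughly (1 + β_b)²γ_b², which is the origin of the orders-of-magnitude savings claimed above.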