NASA Technical Reports Server (NTRS)
1998-01-01
Under a NASA SBIR (Small Business Innovation Research) contract (NAS5-30905), EAI Simulation Associates, Inc., developed a new digital simulation computer, Starlight(tm). With an architecture based on the analog model of computation, Starlight(tm) outperforms all other computers on a wide range of continuous-system simulations. The system is used in a variety of applications, including aerospace, automotive, electric power, and chemical reactors.
Hardware Accelerated Simulated Radiography
Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R
2005-04-12
We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists.
Accelerator simulation using computers
Lee, M.; Zambre, Y.; Corbett, W.
1992-01-01
Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning, and operating the beam line. This paper shows how a "multi-track" simulation and analysis code can be used for these applications.
Particle acceleration in cosmic plasmas – paradigm change?
Lyutikov, Maxim; Guo, Fan
2015-07-21
The presentation begins by considering the requirements on the acceleration mechanism. It is found that at least some particles in high-energy sources are accelerated by magnetic reconnection (and not by shocks). The two paradigms can be distinguished by the hardness of the spectra: shocks typically produce spectra with p > 2 (relativistic shocks have p ~ 2.2); non-linear shocks and drift acceleration may give p < 2, e.g., p = 1.5; B-field dissipation can give p = 1. The presentation then takes up the collapse of a stressed magnetic X-point in force-free plasma and the collapse of a system of magnetic islands, including island merger (forced reconnection). Spectra as functions of sigma are shown, and gamma ~ 10^{9} is addressed. It is concluded that reconnection in magnetically dominated plasma can proceed explosively, is an efficient means of particle acceleration, and is an important (perhaps dominant for some phenomena) mechanism of particle acceleration in high-energy sources.
Accelerator simulation of astrophysical processes
NASA Technical Reports Server (NTRS)
Tombrello, T. A.
1983-01-01
Stellar phenomena involving accelerated ions that can be simulated with laboratory accelerators are described. Stellar evolutionary phases, such as the CNO cycle, have been partially explored with accelerators, up to the consumption of He by alpha-particle radiative capture reactions. Further experimentation is indicated on the reactions N-13(p,gamma)O-14, O-15(alpha,gamma)Ne-19, and O-14(alpha,p)F-17. Accelerated beams interacting with thin foils produce reaction products that permit a determination of possible elemental abundances in stellar objects. Additionally, isotopic ratios observed in chondrites can be duplicated with accelerator beam interactions, and thus constraints can be set on the conditions producing the meteorites. Data on isotopic fractionation from sputtering, i.e., blasting surface atoms from a material using a low-energy ion beam, lead to possible models for processes occurring in supernova explosions. Finally, molecules can be synthesized with accelerators and compared with spectroscopic observations of stellar winds.
Hardware-Accelerated Simulated Radiography
Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R
2005-08-04
We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester.
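In the absorption-only regime described above, the radiative transport equation reduces, along each ray, to the Beer-Lambert attenuation integral I = I0·exp(-∫μ ds). A minimal sketch of that idea on a voxel grid follows; the grid, attenuation values, and parallel-ray geometry are illustrative assumptions, not the paper's hexahedron projection algorithm:

```python
import numpy as np

# Hypothetical 3D attenuation-coefficient grid (units: 1/cm), 1 cm voxels.
mu = np.zeros((32, 32, 32))
mu[8:24, 8:24, 8:24] = 0.5          # a dense cube embedded in vacuum

def radiograph(mu, axis=2, ds=1.0, i0=1.0):
    """Absorption-only radiograph: I = I0 * exp(-integral of mu ds),
    evaluated along parallel rays down one grid axis (a crude
    line-integral proxy for a projection algorithm)."""
    optical_depth = mu.sum(axis=axis) * ds
    return i0 * np.exp(-optical_depth)

image = radiograph(mu)
# Rays through the cube traverse 16 cm of mu = 0.5, so I = exp(-8) there;
# rays that miss the cube are unattenuated (I = 1).
print(image[16, 16], image[0, 0])
```

GPU texture-based implementations evaluate the same per-ray exponential, but accumulate the optical depth in 32-bit floating point texture units rather than on the CPU.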
Accelerated dynamics simulations of nanotubes.
Uberuaga, B. P.; Stuart, S. J.; Voter, A. F.
2002-01-01
We report on the application of accelerated dynamics techniques to the study of carbon nanotubes. We have used the parallel replica method, and temperature-accelerated dynamics simulations are currently in progress. In the parallel replica study, we have stretched tubes at a rate significantly lower than that used in previous studies. In these preliminary results, we find qualitative differences in the rupture of the nanotubes at different temperatures. We plan to extend this investigation to nanotubes of various chiralities and to explore unique nanotube geometries.
Dai, Li; Zhang, Chenchen; Liu, Xiangping
2016-01-01
According to a number of studies, use of a Reading Acceleration Program as reading intervention training has been demonstrated to improve reading speed and comprehension level effectively in most languages and countries. The objective of the current study was to provide further evidence of the effectiveness of a Reading Acceleration Program for Chinese children with reading disabilities using a distinctive Chinese reading acceleration training paradigm. The reading acceleration training paradigm is divided into a non-accelerated reading paradigm, a Character-accelerated reading paradigm and a Words-accelerated reading paradigm. The results of training Chinese children with reading disabilities indicate that the acceleration reading paradigm applies to children with Chinese-reading disabilities. In addition, compared with other reading acceleration paradigms, Words-accelerated reading training is more effective in helping children with reading disabilities read at a high speed while maintaining superior comprehension levels. PMID:28018272
Changing the Paradigm: Simulation, a Method of First Resort
2011-09-01
Master's thesis by Ben L. Anderson, September 2011; thesis advisor: Thomas W. Lucas. The thesis observes that today's computers are over 1,000,000,000 times more powerful than those available to the first simulation pioneers sixty years ago, yet simulation is still widely treated as a method of last resort rather than a method of first resort.
Development of a neural net paradigm that predicts simulator sickness
Allgood, G.O.
1993-03-01
A disease exists that affects pilots and aircrew members who use Navy Operational Flight Training Systems. This malady, commonly referred to as simulator sickness and whose symptomatology closely aligns with that of motion sickness, can compromise the use of these systems because of a reduced utilization factor, negative transfer of training, and reduction in combat readiness. A report is submitted that develops an artificial neural network (ANN) and behavioral model that predicts the onset and level of simulator sickness in the pilots and aircrews who use these systems. It is proposed that the paradigm could be implemented in real time as a biofeedback monitor to reduce the risk to users of these systems. The model captures the neurophysiological impact of use (human-machine interaction) by developing a structure that maps the associative and nonassociative behavioral patterns (learned expectations) and vestibular (otolith and semicircular canals of the inner ear) and tactile interaction, derived from system acceleration profiles, onto an abstract space that predicts simulator sickness for a given training flight.
Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley
2008-07-01
The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley
2011-10-21
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.; /LBL, Berkeley
2011-11-14
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization.
20th Space Simulation Conference: The Changing Testing Paradigm
NASA Technical Reports Server (NTRS)
Stecher, Joseph L., III (Compiler)
1998-01-01
The Institute of Environmental Sciences' Twentieth Space Simulation Conference, "The Changing Testing Paradigm" provided participants with a forum to acquire and exchange information on the state-of-the-art in space simulation, test technology, atomic oxygen, program/system testing, dynamics testing, contamination, and materials. The papers presented at this conference and the resulting discussions carried out the conference theme "The Changing Testing Paradigm."
20th Space Simulation Conference: The Changing Testing Paradigm
NASA Technical Reports Server (NTRS)
Stecher, Joseph L., III (Compiler)
1999-01-01
The Institute of Environmental Sciences and Technology's Twentieth Space Simulation Conference, "The Changing Testing Paradigm" provided participants with a forum to acquire and exchange information on the state-of-the-art in space simulation, test technology, atomic oxygen, program/system testing, dynamics testing, contamination, and materials. The papers presented at this conference and the resulting discussions carried out the conference theme "The Changing Testing Paradigm."
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, Scott J.
2009-01-01
Previous studies have demonstrated an effect of frequency on the gain of tilt and translation perception. Results from different motion paradigms are often combined to extend the stimulus frequency range. For example, Off-Vertical Axis Rotation (OVAR) and Variable Radius Centrifugation (VRC) are useful to test low frequencies of linear acceleration at amplitudes that would require impractical sled lengths. The purpose of this study was to compare roll-tilt and lateral translation motion perception in 12 healthy subjects across four paradigms: OVAR, VRC, sled translation and rotation about an earth-horizontal axis. Subjects were oscillated in darkness at six frequencies from 0.01875 to 0.6 Hz (peak acceleration equivalent to 10 deg, less for sled motion below 0.15 Hz). Subjects verbally described the amplitude of perceived tilt and translation, and used a joystick to indicate the direction of motion. Consistent with previous reports, tilt perception gain decreased as a function of stimulus frequency in the motion paradigms without concordant canal tilt cues (OVAR, VRC and Sled). Translation perception gain was negligible at low stimulus frequencies and increased at higher frequencies. There were no significant differences between the phase of tilt and translation, nor did the phase significantly vary across stimulus frequency. There were differences in perception gain across the different paradigms. Paradigms that included actual tilt stimuli had the larger tilt gains, and paradigms that included actual translation stimuli had larger translation gains. In addition, the frequency at which there was a crossover of tilt and translation gains appeared to vary across motion paradigm between 0.15 and 0.3 Hz. Since the linear acceleration in the head lateral plane was equivalent across paradigms, differences in gain may be attributable to the presence of linear accelerations in orthogonal directions and/or cognitive aspects based on the expected motion paths.
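The abstract notes that OVAR and VRC avoid the impractical sled lengths that low-frequency linear acceleration would otherwise require. A back-of-the-envelope sketch makes this concrete: for sinusoidal sled motion x = A·sin(2πft), matching the gravito-inertial shear of a static tilt requires A·(2πf)² = g·sin(tilt). The 10-degree equivalent tilt is taken from the abstract; treating the stimulus as a pure sinusoid of that equivalent amplitude is an assumption:

```python
import math

G = 9.81  # m/s^2

def sled_amplitude(freq_hz, tilt_deg=10.0):
    """Displacement amplitude A (m) a sled needs so that sinusoidal motion
    x = A*sin(2*pi*f*t) has peak acceleration g*sin(tilt_deg)."""
    omega = 2 * math.pi * freq_hz
    return G * math.sin(math.radians(tilt_deg)) / omega**2

# Lowest, middle, and highest stimulus frequencies from the abstract:
for f in (0.01875, 0.15, 0.6):
    print(f, sled_amplitude(f))
```

At 0.6 Hz the required amplitude is roughly 12 cm, but at 0.01875 Hz it exceeds 100 m, which is why the study used reduced sled acceleration below 0.15 Hz and relied on OVAR/VRC at the low end.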
Parallel beam dynamics simulation of linear accelerators
Qiang, Ji; Ryne, Robert D.
2002-01-31
In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies.
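The map-based half of a split-operator tracking scheme like IMPACT's can be illustrated with plain linear transfer matrices; the FODO-cell parameters below are invented for illustration, and IMPACT itself interleaves 3D space-charge kicks between maps:

```python
import numpy as np

rng = np.random.default_rng(1)

def drift(length):
    """2x2 transfer matrix for a field-free drift of given length (m)."""
    return np.array([[1.0, length], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (m); f < 0 defocuses."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# One FODO cell: focus, drift, defocus, drift (rightmost map acts first).
cell = drift(0.5) @ thin_quad(-1.0) @ drift(0.5) @ thin_quad(1.0)

# Track a bunch of (x, x') pairs through 100 cells, map by map.
bunch = rng.normal(scale=[1e-3, 1e-4], size=(10000, 2))
for _ in range(100):
    bunch = bunch @ cell.T

print(np.std(bunch[:, 0]))  # rms beam size stays bounded: the cell is stable
```

Stability can be checked from |trace(cell)| < 2; tracking a real linac replaces these 2x2 maps with 6D maps plus collective-effect kicks.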
Transient simulation of ram accelerator flowfields
NASA Astrophysics Data System (ADS)
Drabczuk, Randall P.; Rolader, G.; Dash, S.; Sinha, N.; York, B.
1993-01-01
This paper describes the development of an advanced computational fluid dynamic (CFD) simulation capability in support of the USAF Armament Directorate ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high-fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution-adaptive grid clustering, and high-pressure thermo-chemistry. Selected ram accelerator simulations are presented that serve to exhibit the CRAFT code capabilities and identify some of the principal research/design issues.
Transient simulation of ram accelerator flowfields
NASA Astrophysics Data System (ADS)
Sinha, N.; York, B. J.; Dash, S. M.; Drabczuk, R.; Rolader, G. E.
1992-10-01
This paper describes the development of an advanced computational fluid dynamic (CFD) simulation capability in support of the U.S. Air Force Armament Directorate's ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution adaptive grid clustering, high pressure thermochemistry, etc. Selected ram accelerator simulations are presented which serve to exhibit the CRAFT code's capabilities and identify some of the principal research/design issues.
Enabling technologies for petascale electromagnetic accelerator simulation
NASA Astrophysics Data System (ADS)
Lee, Lie-Quan; Akcelik, Volkan; Chen, Sheng; Ge, Lixin; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Ng, Cho; Ko, Kwok; Luo, Xiaojun; Shephard, Mark
2007-07-01
The SciDAC2 accelerator project at SLAC aims to simulate an entire three-cryomodule radio frequency (RF) unit of the International Linear Collider (ILC) main linac. Petascale computing resources, supported by advances in Applied Mathematics (AM) and Computer Science (CS) and by the INCITE Program, are essential to enable the very large-scale electromagnetic accelerator simulations required by the ILC Global Design Effort. This poster presents recent advances and achievements in the areas of CS/AM through collaborations.
Enabling Technologies for Petascale Electromagnetic Accelerator Simulation
Lee, Lie-Quan; Akcelik, Volkan; Chen, Sheng; Ge, Li-Xin; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Ng, Cho; Ko, Kwok; Luo, Xiaojun; Shephard, Mark; /Rensselaer Poly.
2007-11-09
The SciDAC2 accelerator project at SLAC aims to simulate an entire three-cryomodule radio frequency (RF) unit of the International Linear Collider (ILC) main linac. Petascale computing resources, supported by advances in Applied Mathematics (AM) and Computer Science (CS) and by the INCITE Program, are essential to enable the very large-scale electromagnetic accelerator simulations required by the ILC Global Design Effort. This poster presents recent advances and achievements in the areas of CS/AM through collaborations.
Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamless offloading of compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
Kinetic Simulations of Particle Acceleration at Shocks
Caprioli, Damiano; Guo, Fan
2015-07-16
Collisionless shocks are mediated by collective electromagnetic interactions and are sources of non-thermal particles and emission. The full particle-in-cell approach and a hybrid approach are sketched, simulations of collisionless shocks are shown using a multicolor presentation. Results for SN 1006, a case involving ion acceleration and B field amplification where the shock is parallel, are shown. Electron acceleration takes place in planetary bow shocks and galaxy clusters. It is concluded that acceleration at shocks can be efficient: >15%; CRs amplify B field via streaming instability; ion DSA is efficient at parallel, strong shocks; ions are injected via reflection and shock drift acceleration; and electron DSA is efficient at oblique shocks.
NUMERICAL SIMULATIONS OF SPICULE ACCELERATION
Guerreiro, N.; Carlsson, M.; Hansteen, V. E-mail: mats.carlsson@astro.uio.no
2013-04-01
Observations in the H{alpha} line of hydrogen and the H and K lines of singly ionized calcium on the solar limb reveal the existence of structures with jet-like behavior, usually designated as spicules. The driving mechanism for such structures remains poorly understood. Sterling et al. shed some light on the problem mimicking reconnection events in the chromosphere with a one-dimensional code by injecting energy with different spatial and temporal distributions and tracing the thermodynamic evolution of the upper chromospheric plasma. They found three different classes of jets resulting from these injections. We follow their approach but improve the physical description by including non-LTE cooling in strong spectral lines and non-equilibrium hydrogen ionization. Increased cooling and conversion of injected energy into hydrogen ionization energy instead of thermal energy both lead to weaker jets and smaller final extent of the spicules compared with Sterling et al. In our simulations we find different behavior depending on the timescale for hydrogen ionization/recombination. Radiation-driven ionization fronts also form.
Accelerated Aging of the M119 Simulator
NASA Technical Reports Server (NTRS)
Bixon, Eric R.
2000-01-01
This paper addresses the storage requirements, shelf life, and reliability of the M119 Whistling Simulator. Experimental conditions have been determined and the data analysis has been completed for accelerated testing of the system. A general methodology for evaluating the shelf life of the system as a function of storage time, temperature, and relative humidity is discussed.
Accelerated simulation methods for plasma kinetics
NASA Astrophysics Data System (ADS)
Caflisch, Russel
2016-11-01
Collisional kinetics is a multiscale phenomenon due to the disparity between the continuum (fluid) and the collisional (particle) length scales. This paper describes a class of simulation methods for gases and plasmas, and acceleration techniques for improving their speed and accuracy. Starting from the Landau-Fokker-Planck equation for plasmas, the focus will be on a binary collision model that is solved using a Direct Simulation Monte Carlo (DSMC) method. Acceleration of this method is achieved by coupling the particle method to a continuum fluid description. The velocity distribution function f is represented as a combination of a Maxwellian M (the thermal component) and a set of discrete particles fp (the kinetic component). For systems that are close to (local) equilibrium, this reduces the number N of simulated particles that are required to represent f for a given level of accuracy. We present two methods for exploiting this representation. In the first method, equilibration of particles in fp, as well as disequilibration of particles from M, due to the collision process, is represented by a thermalization/dethermalization step that employs an entropy criterion. Efficiency of the representation is greatly increased by inclusion of particles with negative weights. This significantly complicates the simulation, but the second method is a tractable approach for negatively weighted particles. The accelerated simulation method is compared with standard PIC-DSMC method for both spatially homogeneous problems such as a bump-on-tail and inhomogeneous problems such as nonlinear Landau damping.
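The Maxwellian-plus-particles representation of f described above can be sketched in one velocity dimension. The `thermalize` step below conserves mass, momentum, and energy when a discrete particle is absorbed into the Maxwellian component M, standing in for the paper's entropy-based thermalization criterion; the class and parameter names are invented for illustration, and the actual method also handles dethermalization and negatively weighted particles:

```python
class HybridF:
    """1D hybrid distribution: Maxwellian bulk (density n, mean velocity u,
    temperature T, with m = k_B = 1) plus discrete particles of weight w."""

    def __init__(self, n, u, T, particles, w):
        self.n, self.u, self.T = n, u, T
        self.particles = list(particles)
        self.w = w

    def thermalize(self, i):
        """Absorb particle i into the Maxwellian, conserving mass,
        momentum, and energy (energy density = 0.5*n*(u**2 + T) in 1D)."""
        v = self.particles.pop(i)
        n_new = self.n + self.w
        u_new = (self.n * self.u + self.w * v) / n_new
        e_total = 0.5 * self.n * (self.u**2 + self.T) + 0.5 * self.w * v**2
        T_new = 2.0 * e_total / n_new - u_new**2
        self.n, self.u, self.T = n_new, u_new, T_new

# A near-equilibrium bulk plus one fast particle being thermalized:
f = HybridF(n=1.0, u=0.0, T=1.0, particles=[3.0], w=0.01)
f.thermalize(0)
print(f.n, f.u, f.T)
```

Because near-equilibrium mass lives in M, far fewer simulated particles are needed in fp for a given accuracy, which is the source of the acceleration.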
An exact accelerated stochastic simulation algorithm.
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-04-14
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems, including a chaotic reaction network. At the same time, ER-leap offers a substantial speedup over SSA, with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
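The exact baseline that ER-leap accelerates is Gillespie's direct-method SSA, which is short enough to state in full. A minimal sketch follows; the toy isomerization network is illustrative, not one of the paper's test problems:

```python
import random

def ssa(propensities, update, state, t_end, rng=random.Random(0)):
    """Gillespie direct-method SSA. `propensities(state)` returns one rate
    per reaction; `update(state, j)` applies reaction j in place."""
    t = 0.0
    while True:
        a = propensities(state)
        a0 = sum(a)
        if a0 == 0.0:                     # no reaction can fire
            return state
        t += rng.expovariate(a0)          # exponential waiting time
        if t > t_end:
            return state
        r, acc = rng.random() * a0, 0.0   # pick which reaction fires
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                update(state, j)
                break

# Irreversible isomerization A -> B at rate k*A:
k = 1.0
final = ssa(lambda s: [k * s["A"]],
            lambda s, j: (s.__setitem__("A", s["A"] - 1),
                          s.__setitem__("B", s["B"] + 1)),
            {"A": 1000, "B": 0}, t_end=10.0)
print(final)  # nearly all A converted; A + B is always conserved at 1000
```

ER-leap keeps this exactness but leaps over blocks of reaction events, using the paper's probability bounds plus rejection sampling to correct for the approximation.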
Quench simulation program for superconducting accelerator magnets
Seog-Whan Kim
2001-08-10
In the design of superconducting magnets for accelerators and their quench protection systems, it is necessary to calculate the current, voltage, and temperature during a quench. The quench integral value (MIITs) gives a rough idea of quench behavior, but numerical calculation is needed to obtain a more detailed picture. A simulation program named KUENCH, which is not based on the MIITs calculation, was developed to calculate the voltage, current, and temperature of accelerator magnets during quenches. The software and calculation examples are introduced. The examples also give important information about the effects of copper content in the coil and of quench protection heaters.
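The MIITs quantity that KUENCH goes beyond is simply the quench integral of I² over time, conventionally quoted in units of 10^6 A²·s. A sketch for an assumed exponential current dump I(t) = I0·exp(-t/τ) follows; the magnet current and decay constant are illustrative numbers, not values from the paper:

```python
import math

def miits(i0, tau, dt=1e-4, n_tau=5.0):
    """Quench integral of I(t)^2 for an exponential current dump
    I(t) = i0*exp(-t/tau), by direct quadrature, in MIITs (1e6 A^2*s)."""
    total, t = 0.0, 0.0
    while t < n_tau * tau:
        i = i0 * math.exp(-t / tau)
        total += i * i * dt
        t += dt
    return total / 1e6

# A 10 kA magnet current dumped with a 0.2 s time constant:
# analytic value is i0**2 * tau / 2 = 1e7 A^2*s = 10 MIITs.
print(miits(1e4, 0.2))
```

The MIITs value is then matched against a material integral over copper/superconductor properties to bound the hot-spot temperature; a full code like KUENCH instead evolves the temperature and resistance distributions directly.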
Simulations for Plasma and Laser Acceleration
NASA Astrophysics Data System (ADS)
Vay, Jean-Luc; Lehe, Rémi
Computer simulations have had a profound impact on the design and understanding of past and present plasma acceleration experiments, and will be a key component for turning plasma accelerators from a promising technology into a mainstream scientific tool. In this article, we present an overview of the numerical techniques used with the most popular approaches to model plasma-based accelerators: electromagnetic particle-in-cell, quasistatic and ponderomotive guiding center. The material that is presented is intended to serve as an introduction to the basics of those approaches, and to advances (some of them very recent) that have pushed the state of the art, such as the optimal Lorentz-boosted frame, advanced laser envelope solvers and the elimination of numerical Cherenkov instability. The particle-in-cell method, which has broader interest and is more standardized, is presented in more depth. Additional topics that are cross-cutting, such as azimuthal Fourier decomposition or filtering, are also discussed, as well as potential challenges and remedies in the initialization of simulations and output of data. Examples of simulations using the techniques that are presented have been left out of this article for conciseness, and because simulation results are best understood when presented together, and contrasted with theoretical and/or experimental results, as in other articles of this volume.
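The optimal Lorentz-boosted frame mentioned above pays off because boosting contracts the plasma column while dilating the laser wavelength, shrinking the disparity between the largest and smallest scales that the grid must resolve; the overall run-time gain is commonly quoted as roughly (1+β)²γ². The sketch below computes just the length-to-wavelength disparity, which drops by γ²(1+β); the plasma length and laser wavelength are illustrative values:

```python
import math

def boosted_scale_ratio(l_plasma, lambda_laser, gamma):
    """Plasma-length / laser-wavelength disparity in the lab frame and in
    a frame boosted with Lorentz factor gamma along the laser axis: the
    plasma contracts by gamma while the wavelength dilates by
    gamma*(1 + beta)."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    lab = l_plasma / lambda_laser
    boosted = (l_plasma / gamma) / (lambda_laser * gamma * (1.0 + beta))
    return lab, boosted

# A 1 cm plasma stage driven by a 0.8 um laser, boosted with gamma = 10:
lab, boosted = boosted_scale_ratio(l_plasma=0.01, lambda_laser=0.8e-6, gamma=10.0)
print(lab / boosted)  # gamma**2 * (1 + beta) ~ 199.5
```

The extra factor of (1+β) in the quoted run-time gain comes from the plasma counter-propagating toward the laser in the boosted frame, shortening the interaction time as well.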
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
Accelerated GPU-based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
Neoadjuvant paradigm for accelerated drug development: an ideal model in bladder cancer.
Chism, David D; Woods, Michael E; Milowsky, Matthew I
2013-01-01
Neoadjuvant cisplatin-based combination chemotherapy for muscle-invasive bladder cancer (MIBC) has been shown to confer a survival advantage in two randomized clinical trials and a meta-analysis. Despite level 1 evidence supporting its benefit, utilization remains dismal with nearly one-half of patients ineligible for cisplatin-based therapy because of renal dysfunction, impaired performance status, and/or coexisting medical problems. This situation highlights the need for the development of novel therapies for the management of MIBC, a disease with a lethal phenotype. The neoadjuvant paradigm in bladder cancer offers many advantages for accelerated drug development. First, there is a greater likelihood of successful therapy at an earlier disease state that may be characterized by less genomic instability compared with the metastatic setting, with an early readout of activity with results determined in months rather than years. Second, pre- and post-treatment tumor tissue collection in patients with MIBC is performed as the standard of care without the need for research-directed biopsies, allowing for the ability to perform important correlative studies and to monitor tumor response to therapy in "real time." Third, pathological complete response (pT0) predicts for improved outcome in patients with MIBC. Fourth, there is a strong biological rationale with rapidly accumulating evidence for actionable targets in bladder cancer. This review focuses on the neoadjuvant paradigm for accelerated drug development using bladder cancer as the ideal model.
Accelerator simulation activities at the SSCL
Bourianoff, G.
1992-11-01
This paper summarizes recent accelerator simulation activities at the SSC Laboratory. Topics include operational simulations of injection, extraction, and correction; performance prediction for a specified lattice design, in particular the effect of higher-order multipoles on the linear aperture and the effect of power supply ripple on emittance growth in the collider; and the development and application of advanced particle-tracking techniques, e.g., parallel processing and mapping techniques.
Numerical and laboratory simulations of auroral acceleration
Gunell, H.; De Keyser, J.; Mann, I.
2013-10-15
The existence of parallel electric fields is an essential ingredient of auroral physics, leading to the acceleration of particles that give rise to the auroral displays. An auroral flux tube is modelled using electrostatic Vlasov simulations, and the results are compared to simulations of a proposed laboratory device that is meant for studies of the plasma physical processes that occur on auroral field lines. The hot magnetospheric plasma is represented by a gas discharge plasma source in the laboratory device, and the cold plasma mimicking the ionospheric plasma is generated by a Q-machine source. In both systems, double layers form with plasma density gradients concentrated on their high potential sides. The systems differ regarding the properties of ion acoustic waves that are heavily damped in the magnetosphere, where the ion population is hot, but weakly damped in the laboratory, where the discharge ions are cold. Ion waves are excited by the ion beam that is created by acceleration in the double layer in both systems. The efficiency of this beam-plasma interaction depends on the acceleration voltage. For voltages where the interaction is less efficient, the laboratory experiment is more space-like.
DIFFUSIVE SHOCK ACCELERATION SIMULATIONS OF RADIO RELICS
Kang, Hyesung; Ryu, Dongsu; Jones, T. W. E-mail: ryu@canopus.cnu.ac.kr
2012-09-01
Recent radio observations have identified a class of structures, so-called radio relics, in clusters of galaxies. The radio emission from these sources is interpreted as synchrotron radiation from GeV electrons gyrating in μG-level magnetic fields. Radio relics, located mostly in the outskirts of clusters, seem to be associated with shock waves, especially those developed during mergers. In fact, they seem to be good structures with which to identify and probe such shocks in intracluster media (ICMs), provided we understand the electron acceleration and re-acceleration at those shocks. In this paper, we describe time-dependent simulations for diffusive shock acceleration at weak shocks that are expected to be found in ICMs. Freshly injected as well as pre-existing populations of cosmic-ray (CR) electrons are considered, and energy losses via synchrotron and inverse Compton are included. We then compare the synchrotron flux and spectral distributions estimated from the simulations with those in two well-observed radio relics in CIZA J2242.8+5301 and ZwCl0008.8+5215. Considering that CR electron injection is expected to be rather inefficient at weak shocks with Mach number M ≲ a few, the existence of radio relics could indicate a pre-existing population of low-energy CR electrons in ICMs. The implication of our results for the merger shock scenario of radio relics is discussed.
An exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-04-01
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
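ER-leap accelerates the standard SSA while preserving exactness. As background for what is being accelerated, here is a minimal sketch of the baseline Gillespie direct method on a hypothetical birth-death network; the network, rate constants, and function names are illustrative, not taken from the paper.

```python
import random

def ssa_direct(x0, reactions, t_end, seed=1):
    """Gillespie direct method: one exact realization of a well-mixed
    reaction network. `reactions` pairs a propensity function with a
    state-change vector; returns the final species counts."""
    rng = random.Random(seed)
    x, t = list(x0), 0.0
    while True:
        props = [a(x) for a, _ in reactions]
        total = sum(props)
        if total == 0.0:
            break                          # no reaction can fire
        wait = rng.expovariate(total)      # exponential waiting time
        if t + wait > t_end:
            break                          # next event falls past t_end
        t += wait
        r, acc = rng.random() * total, 0.0
        for p, (_, dx) in zip(props, reactions):
            acc += p
            if r <= acc:                   # choose reaction with prob p/total
                x = [xi + di for xi, di in zip(x, dx)]
                break
    return x

# Hypothetical birth-death network: 0 -> A at rate 10, A -> 0 at rate 1.0*A.
reactions = [(lambda x: 10.0, [1]),
             (lambda x: 1.0 * x[0], [-1])]
final = ssa_direct([0], reactions, t_end=50.0)
```

ER-leap replaces this one-event-at-a-time loop with a rejection-sampled batch of events, using the analytic bounds on multireaction probabilities described in the abstract.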
Toward GPGPU accelerated human electromechanical cardiac simulations
Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David
2014-01-01
In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart—a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
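Both ER-leap and HiER-leap rest on the same rejection-sampling idea: cheap upper and lower bounds settle most proposals without the expensive exact computation. A generic "squeeze" rejection sampler illustrates the principle; the Gaussian target and the particular bounds below are illustrative stand-ins, not the papers' propensity bounds.

```python
import math
import random

def squeeze_sample(target, lower, upper, propose, rng):
    """Rejection sampling with a squeeze: `upper` dominates `target`,
    `lower` is a cheap under-estimate. Most draws are settled by the
    bounds alone; `target` is only evaluated when they disagree."""
    while True:
        x = propose(rng)
        u = rng.random() * upper(x)
        if u <= lower(x):      # inexpensive early acceptance
            return x
        if u <= target(x):     # exact (expensive) test
            return x

# Illustrative target: standard normal density, truncated to [-3, 3].
c = 1.0 / math.sqrt(2.0 * math.pi)
target = lambda x: c * math.exp(-0.5 * x * x)
upper = lambda x: c                                   # max of the density
lower = lambda x: max(0.0, c * (1.0 - 0.5 * x * x))   # since e^t >= 1 + t
propose = lambda rng: rng.uniform(-3.0, 3.0)

rng = random.Random(0)
samples = [squeeze_sample(target, lower, upper, propose, rng)
           for _ in range(2000)]
```

HiER-leap's addition is structural: reaction channels are grouped into blocks, intra-block sampling runs in parallel, and one accept/reject step synchronizes across blocks.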
Particle Simulations of a Linear Dielectric Wall Proton Accelerator
Poole, B R; Blackfield, D T; Nelson, S D
2007-06-12
The dielectric wall accelerator (DWA) is a compact induction accelerator structure that incorporates the accelerating mechanism, pulse forming structure, and switch structure into an integrated module. The DWA consists of stacked stripline Blumlein assemblies, which can provide accelerating gradients in excess of 100 MV/m. Blumleins are switched sequentially according to a prescribed acceleration schedule to maintain synchronism with the proton bunch as it accelerates. A finite difference time domain (FDTD) code is used to determine the acceleration field applied to the proton bunch. Particle simulations are used to model the injector as well as the accelerator stack to determine the proton bunch energy distribution, both longitudinal and transverse dynamic focusing, and emittance growth associated with various DWA configurations.
Numerical simulation of an accelerator injector
Boyd, J.K.; Caporaso, G.J.; Cole, A.G.
1985-05-09
Accelerator injector designs have been evaluated using two computer codes. The first code self-consistently follows relativistic particles in two dimensions. Fields are obtained in the Darwin model, which includes inductive effects. This code is used to study cathode emission and acceleration to full injector voltage. The second code transports a fixed segment of a beam along the remainder of the beam line. Using these two codes, the effects of electrode configuration on emittance, beam quality, and beam transport have been studied.
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.
2011-01-01
The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s² (equivalent to 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation involving actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which did not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for
Simulation of a medical linear accelerator for teaching purposes.
Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco
2015-05-08
Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.
A linear accelerator for simulated micrometeors.
NASA Technical Reports Server (NTRS)
Slattery, J. C.; Becker, D. G.; Hamermesh, B.; Roy, N. L.
1973-01-01
Review of the theory, design parameters, and construction details of a linear accelerator designed to impart meteoric velocities to charged microparticles in the 1- to 10-micron diameter range. The described linac is of the Sloan Lawrence type and, in a significant departure from conventional accelerator practice, is adapted to single particle operation by employing a square wave driving voltage with the frequency automatically adjusted from 12.5 to 125 kHz according to the variable velocity of each injected particle. Any output velocity up to about 30 km/sec can easily be selected, with a repetition rate of approximately two particles per minute.
Accelerated growth of calcium silicate hydrates: Experiments and simulations
Nicoleau, Luc
2011-12-15
Despite the usefulness of isothermal calorimetry in cement analytics, without further computation it provides only limited information on the nucleation and growth of hydrates. A model originally developed by Garrault et al. is used in this study to simulate calorimetric hydration curves of cement with different known hardening accelerators. The limited set of parameters used in this model, each having a physical or chemical significance, is valuable for a better understanding of the mechanisms underlying the acceleration of C-S-H precipitation. Alite hydration in the presence of four different types of hardening accelerators was investigated. It is shown that each accelerator type acts on one or several growth parameters and that the model may support the development of new accelerators. These simulations, supported by experimental observations, enable us to follow the formation of the C-S-H layer around grains and to extract useful information on its apparent permeability.
ELECTROMAGNETIC SIMULATIONS OF DIELECTRIC WALL ACCELERATOR STRUCTURES FOR ELECTRON BEAM ACCELERATION
Nelson, S D; Poole, B R
2005-05-05
Dielectric Wall Accelerator (DWA) technology incorporates the energy storage mechanism, the switching mechanism, and the acceleration mechanism for electron beams. Electromagnetic simulations of DWA structures include these effects as well as details of the switch configuration and how the switch time affects the electric field pulse that accelerates the particle beam. DWA structures include both bi-linear and bi-spiral configurations with field gradients on the order of 20 MV/m, and the simulations include the effects of the beampipe, the beampipe walls, the DWA High Gradient Insulator (HGI) insulating stack, wakefield impedance calculations, and test particle trajectories with low emittance gain. Design trade-offs include the transmission line impedance (typically a few ohms), equilibration ring optimization, driving switch inductances, and layer-to-layer coupling effects and their associated effect on the acceleration pulse's peak value.
Simulation of electron post-acceleration in a two-stage laser Wakefield accelerator
Reitsma, A.J.W.; Leemans, W.P.; Esarey, E.; Kamp, L.P.J.; Schep, T.J.
2002-04-01
Electron bunches produced in self-modulated laser wakefield experiments usually have a broad energy spectrum, with most electrons at low energy (1-3 MeV) and only a small fraction at high energy. We propose and investigate further acceleration of such bunches in a channel-guided resonant laser wakefield accelerator. Two-dimensional simulations with and without the effects of self-consistent beam loading are performed and compared. These results indicate that it is possible to trap about 40 percent of the injected bunch charge and accelerate this fraction to an average energy of about 50 MeV in a plasma channel of a few mm.
Simulations of collisionless shocks - Some implications for particle acceleration
NASA Astrophysics Data System (ADS)
Burgess, D.
1992-08-01
The role of self-consistent plasma simulations is discussed with reference to collisionless shock structure and the extraction of thermal particles to supra-thermal energies. Examples are given from quasi-perpendicular and parallel shock geometries. The cyclic reformation behavior of the quasi-parallel shock, as revealed by simulations, is detailed, and some implications given. Finally, some recent advances are described in the techniques of simulation of strong particle acceleration.
Start-to-end simulation with rare isotope beam for post accelerator of the RAON accelerator
NASA Astrophysics Data System (ADS)
Jin, Hyunchang; Jang, Ji-Ho
2016-09-01
The RAON accelerator for the Rare Isotope Science Project (RISP) has been developed to create and accelerate various kinds of stable heavy ion beams and rare isotope beams for a wide range of science applications. In the RAON accelerator, the rare isotope beams generated by the Isotope Separation On-Line (ISOL) system will be transported through the post accelerator, namely, from the post Low Energy Beam Transport (LEBT) system and the post Radio Frequency Quadrupole (RFQ) to the superconducting linac (SCL3). The accelerated beams will be put to use in the low energy experimental hall or accelerated again by the superconducting linac (SCL2) in order to be used in the high energy experimental hall. In this paper, we will describe the results of the start-to-end simulations with the rare isotope beams generated by the ISOL system in the post accelerator of the RAON accelerator. In addition, the error analysis and correction at the superconducting linac SCL3 will be presented.
Accelerating ab initio molecular dynamics simulations by linear prediction methods
NASA Astrophysics Data System (ADS)
Herr, Jonathan D.; Steele, Ryan P.
2016-09-01
Acceleration of ab initio molecular dynamics (AIMD) simulations can be reliably achieved by extrapolation of electronic data from previous timesteps. Existing techniques utilize polynomial least-squares regression to fit previous steps' Fock or density matrix elements. In this work, the recursive Burg 'linear prediction' technique is shown to be a viable alternative to polynomial regression, and the extrapolation-predicted Fock matrix elements were three orders of magnitude closer to converged elements. Accelerations of 1.8-3.4× were observed in test systems, and in all cases, linear prediction outperformed polynomial extrapolation. Importantly, these accelerations were achieved without reducing the MD integration timestep.
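The extrapolation scheme can be sketched on a scalar time series; a Fock or density matrix element's history would be treated the same way, one element at a time. The Burg recursion below is a generic textbook implementation applied to an illustrative oscillating signal, not the authors' code.

```python
import math

def burg_coeffs(x, order):
    """Burg's method: estimate AR coefficients a[0..order] (a[0] = 1)
    by minimizing combined forward and backward prediction error."""
    n = len(x)
    a = [1.0]
    f, b = list(x), list(x)                # forward/backward errors
    for m in range(1, order + 1):
        num = sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = -2.0 * num / den               # reflection coefficient
        a = a + [0.0]
        a = [a[i] + k * a[m - i] for i in range(m + 1)]
        nf, nb = f[:], b[:]                # update error sequences
        for i in range(m, n):
            nf[i] = f[i] + k * b[i - 1]
            nb[i] = b[i - 1] + k * f[i]
        f, b = nf, nb
    return a

def predict_next(x, order):
    """Linear-prediction extrapolation of the next sample."""
    a = burg_coeffs(x, order)
    return -sum(a[k] * x[-k] for k in range(1, order + 1))

# Smoothly oscillating history, a stand-in for one matrix element's
# trajectory over previous MD timesteps.
hist = [math.cos(0.3 * i) for i in range(32)]
pred = predict_next(hist, order=2)   # compare against cos(0.3 * 32)
```

A polynomial least-squares fit through the same history is the baseline the paper compares against; linear prediction instead models the history as an autoregressive process, which suits oscillatory electronic quantities.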
Lunar Dust Simulant in Mechanical Component Testing - Paradigm and Practicality
NASA Technical Reports Server (NTRS)
Jett, T.; Street, K.; Abel, P.; Richmond, R.
2008-01-01
Due to the uniquely harsh lunar surface environment, terrestrial test activities may not adequately represent abrasive wear by lunar dust likely to be experienced in mechanical systems used in lunar exploration. Testing to identify potential moving mechanism problems has recently begun within the NASA Engineering and Safety Center Mechanical Systems Lunar Dust Assessment activity in coordination with the Exploration Technology and Development Program Dust Management Project, and these complementary efforts will be described. Specific concerns about differences between simulant and lunar dust, and procedures for mechanical component testing with lunar simulant will be considered. In preparing for long term operations within a dusty lunar environment, the three fundamental approaches to keeping mechanical equipment functioning are dust avoidance, dust removal, and dust tolerance, with some combination of the three likely to be found in most engineering designs. Methods to exclude dust from contact with mechanical components would constitute mitigation by dust avoidance, so testing seals for dust exclusion efficacy as a function of particle size provides useful information for mechanism design. Dust of particle size less than a micron is not well documented for impact on lunar mechanical components. Therefore, creating a standardized lunar dust simulant in the particulate size range of ca. 0.1 to 1.0 micrometer is useful for testing effects on mechanical components such as bearings, gears, seals, bushings, and other moving mechanical assemblies. Approaching actual wear testing of mechanical components, it is beneficial to first establish relative wear rates caused by dust on commonly used mechanical component materials. The wear mode due to dust within mechanical components, such as abrasion caused by dust in grease(s), needs to be considered, as well as the effects of vacuum, lunar thermal cycle, and electrostatics on wear rate.
Accelerating Subsurface Transport Simulation on Heterogeneous Clusters
Villa, Oreste; Gawande, Nitin A.; Tumeo, Antonino
2013-09-23
Reactive transport numerical models simulate chemical and microbiological reactions that occur along a flowpath. These models have to compute reactions for a large number of locations. They solve the set of ordinary differential equations (ODEs) that describes the reactions for each location through the Newton-Raphson technique. This technique involves computing a Jacobian matrix and a residual vector for each set of equations, and then iteratively solving the linearized system by performing Gaussian elimination and LU decomposition until convergence. STOMP, a well-known subsurface flow simulation tool, employs matrices with sizes on the order of 100x100 elements and, for numerical accuracy, LU factorization with full pivoting instead of the faster partial pivoting. Modern high performance computing systems are heterogeneous machines whose nodes integrate both CPUs and GPUs, exposing unprecedented amounts of parallelism. To exploit all their computational power, applications must use both types of processing elements. For the case of subsurface flow simulation, this mainly requires implementing efficient batched LU-based solvers and identifying efficient solutions for enabling load balancing among the different processors of the system. In this paper we discuss two approaches that allow scaling STOMP's performance on heterogeneous clusters. We initially identify the challenges in implementing batched LU-based solvers for small matrices on GPUs, and propose an implementation that fulfills STOMP's requirements. We compare this implementation to other existing solutions. Then, we combine the batched GPU solver with an OpenMP-based CPU solver, and present an adaptive load balancer that dynamically distributes the linear systems to solve between the two components inside a node. We show how these approaches, integrated into the full application, provide speedups of 6 to 7 times on large problems, executed on up to 16 nodes of a cluster with two AMD Opteron 6272
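The kernel being batched here is small but non-trivial to parallelize because of its data-dependent row and column exchanges. A scalar sketch of Gaussian elimination with full pivoting, the accuracy-motivated choice mentioned above, makes those exchanges explicit; this is an illustrative reference implementation, not STOMP's code.

```python
def lu_solve_full_pivot(A, b):
    """Solve A x = b by Gaussian elimination with full (complete)
    pivoting: at each step the largest remaining |entry| is moved to
    the pivot position by a row swap plus a column swap."""
    n = len(A)
    A = [row[:] for row in A]              # work on copies
    b = b[:]
    col_of = list(range(n))                # current col -> original variable
    for k in range(n):
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        A[k], A[p] = A[p], A[k]            # row swap (and RHS entry)
        b[k], b[p] = b[p], b[k]
        for row in A:                      # column swap reorders unknowns
            row[k], row[q] = row[q], row[k]
        col_of[k], col_of[q] = col_of[q], col_of[k]
        for i in range(k + 1, n):          # eliminate below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    y = [0.0] * n                          # back substitution
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))
        y[i] = s / A[i][i]
    x = [0.0] * n                          # undo the column permutation
    for i, ci in enumerate(col_of):
        x[ci] = y[i]
    return x

# Hypothetical small system standing in for one batched chemistry solve.
x = lu_solve_full_pivot([[2.0, 1.0, 1.0],
                         [1.0, 3.0, 2.0],
                         [1.0, 0.0, 0.0]],
                        [4.0, 5.0, 6.0])
```

On a GPU, many such factorizations run concurrently, and the column permutation bookkeeping is one reason full pivoting is harder to batch efficiently than partial pivoting.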
Electron Acceleration in Shock-Shock Interaction: Simulations and Observations
NASA Astrophysics Data System (ADS)
Nakanotani, M.; Matsukiyo, S.; Mazelle, C. X.; Hada, T.
2015-12-01
Collisionless shock waves play a crucial role in producing high energy particles (cosmic rays) in space. While most past studies of particle acceleration assume the presence of a single shock, in space two shocks frequently come close to or even collide with each other. Hietala et al. [2011] observed the collision of an interplanetary shock and the earth's bow shock and the associated acceleration of energetic ions. The kinetic nature of a shock-shock collision is not well understood. The only prior kinetic work, a hybrid simulation study by Cargill et al. [1986], focused on a collision of two supercritical shocks and the resultant ion acceleration. We expect that electron acceleration can similarly occur in a shock-shock collision. To investigate the electron acceleration process in a shock-shock collision, we perform one-dimensional full particle-in-cell (PIC) simulations. In the simulations, energetic electrons are observed between the two approaching shocks before they collide. These energetic electrons are efficiently accelerated through multiple reflections at the two shocks (Fermi acceleration). The reflected electrons create a temperature anisotropy and excite large amplitude waves upstream via the electron fire hose instability. The large amplitude waves can scatter the energetic electrons in pitch angle so that some of them gain large pitch angles and are easily reflected when they encounter the shocks subsequently. The reflected electrons can thus sustain, or probably even strengthen, these waves. We further discuss observational results of an interaction of interplanetary shocks and the earth's bow shock by examining mainly Cluster data. We focus on whether or not electrons are accelerated in the shock-shock interaction.
Ritanserin facilitates anxiety in a simulated public-speaking paradigm.
Guimarães, F S; Mbaya, P S; Deakin, J F
1997-01-01
The effects of ritanserin, a 5-HT2A/2C (5-hydroxytryptamine) antagonist, have been investigated in simulated public speaking with healthy volunteers. The aim was to investigate the role of 5-HT in subjective experimental anxiety. There were three experimental groups each comprising four or five males and 11 females. Subjects received placebo, ritanserin 2.5 or 10 mg, p.o. They rated themselves using the Spielberger State-Trait Anxiety Inventory and visual analogue scales factored into anxiety, sedation and discontentment scores. Autonomic measures included skin conductance and heart rate. Subjects were told, 75 min after drug or placebo ingestion, without prior warning, to prepare a 4-min speech. Measures were taken before, during and after the speech. Ritanserin prolonged the anxiety induced by the procedure on the subjective ratings but had minimal effect on autonomic responses to the procedure. The result contrasts with an anxiolytic-like effect of ritanserin on aversively conditioned autonomic responses. The present finding is compatible with animal behavioural evidence that 5-HT has distinct and opposing roles in modulating conditioned and unconditioned anxiety.
Scaled simulations of a 10 GeV accelerator
Cormier-Michel, Estelle; Geddes, C.G.R; Esarey, E.; Schroeder, C.B.; Bruhwiler, D.L.; Paul, K.; Cowan, B.; Leemans, W.P.
2008-09-08
Laser plasma accelerators are able to produce high quality electron beams from 1 MeV to 1 GeV. The next generation of plasma accelerator experiments will likely use a multi-stage approach where a high quality electron bunch is first produced and then injected into an accelerating structure. In this paper we present scaled particle-in-cell simulations of a 10 GeV stage in the quasi-linear regime. We show that physical parameters can be scaled to be able to perform these simulations at reasonable computational cost. Beam loading properties and electron bunch energy gain are calculated. A range of parameter regimes are studied to optimize the quality of the electron bunch at the output of the stage.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, with hybrid parallelization (each of the multiple simulations runs simultaneously, and the computational tasks within each simulation are also parallelized), is about 16 times faster than a sequential simulation on a CPU. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on models of various sizes, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
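The direct-method SSA at the heart of this approach can be sketched in a few lines. The toy birth-death model and rate values below are illustrative choices, not taken from the paper; the GPU version essentially runs many such independent realizations concurrently:

```python
import random

def ssa_birth_death(x0, k_birth, k_death, t_end, seed=0):
    """Gillespie direct-method SSA for a birth-death process
    (0 -> X at rate k_birth; X -> 0 at rate k_death * x).
    Returns the (time, copy-number) trajectory of one realization."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        a_birth = k_birth            # propensity of the birth channel
        a_death = k_death * x        # propensity of the death channel
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        t += rng.expovariate(a_total)         # exponential waiting time
        if rng.random() * a_total < a_birth:  # pick the firing channel
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj
```

Averaging the final state over many seeds recovers the statistics (here, a steady-state mean near k_birth/k_death), which is exactly the embarrassingly parallel workload mapped to GPU threads.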
Blind protein structure prediction using accelerated free-energy simulations
Perez, Alberto; Morrone, Joseph A.; Brini, Emiliano; MacCallum, Justin L.; Dill, Ken A.
2016-01-01
We report a key proof of principle of a new acceleration method [Modeling Employing Limited Data (MELD)] for predicting protein structures by molecular dynamics simulation. It shows that such Boltzmann-satisfying techniques are now sufficiently fast and accurate to predict native protein structures in a limited test within the Critical Assessment of Structure Prediction (CASP) community-wide blind competition. PMID:27847872
Transverse wake field simulations for the ILC acceleration structure
Solyak, N.; Lunin, A.; Yakovlev, V.; /Fermilab
2008-06-01
Details of wake potential simulations in the ILC acceleration structure, including the RF cavities and input/HOM couplers, are presented. The dependence of the transverse wake potential on bunch length is described. Beam emittance dilution caused by the main and HOM couplers is estimated, followed by a discussion of possible structural modifications that would reduce the transverse wake potential.
Simulations of ion acceleration at non-relativistic shocks. I. Acceleration efficiency
Caprioli, D.; Spitkovsky, A.
2014-03-10
We use two-dimensional and three-dimensional hybrid (kinetic ions-fluid electrons) simulations to investigate particle acceleration and magnetic field amplification at non-relativistic astrophysical shocks. We show that diffusive shock acceleration operates for quasi-parallel configurations (i.e., when the background magnetic field is almost aligned with the shock normal) and, for large sonic and Alfvénic Mach numbers, produces universal power-law spectra ∝ p^(-4), where p is the particle momentum. The maximum energy of accelerated ions increases with time and is limited only by the finite box size and run time. Acceleration is mainly efficient for parallel and quasi-parallel strong shocks, where 10%-20% of the bulk kinetic energy can be converted to energetic particles, and becomes ineffective for quasi-perpendicular shocks. Also, the generation of magnetic turbulence correlates with efficient ion acceleration and vanishes for quasi-perpendicular configurations. At very oblique shocks, ions can be accelerated via shock drift acceleration, but they only gain a factor of a few in momentum and their maximum energy does not increase with time. These findings are consistent with the degree of polarization and the morphology of the radio and X-ray synchrotron emission observed, for instance, in the remnant of SN 1006. We also discuss the transition from thermal to non-thermal particles in the ion spectrum (supra-thermal region), and we identify two dynamical signatures peculiar to efficient particle acceleration, namely the formation of an upstream precursor and the alteration of the standard shock jump conditions.
Monte Carlo simulation of particle acceleration at astrophysical shocks
NASA Technical Reports Server (NTRS)
Campbell, Roy K.
1989-01-01
A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic-ray spectra derived from Monte Carlo simulations exhibit the same power-law behavior as spectra derived from analytic calculations based on a diffusion equation, and this high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment; Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool, which is the triumph of the Monte Carlo approach. Acceleration efficiency is an important question in shock acceleration. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined; the efficiency of the acceleration process depends on the thermal-particle pick-up and hence on the details of the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated-particle spectra and the MHD turbulence spectra: presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better
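The core loop of such a test-particle Monte Carlo is simple to sketch. The Python fragment below is an illustrative toy, not the Turbo Pascal code described above: a fixed fractional energy gain per shock-crossing cycle plus a fixed escape probability yields the expected power-law spectrum, with integral slope roughly ln(1 - p_esc)/ln(1 + gain):

```python
import random

def fermi_spectrum(n_particles=20000, gain=0.1, p_esc=0.2, e0=1.0, seed=1):
    """Test-particle Monte Carlo of first-order Fermi acceleration:
    each shock-crossing cycle multiplies the energy by (1 + gain);
    after each cycle the particle escapes downstream with probability
    p_esc. Returns the final particle energies."""
    rng = random.Random(seed)
    energies = []
    for _ in range(n_particles):
        e = e0
        while rng.random() > p_esc:   # particle stays in the accelerator
            e *= 1.0 + gain           # energy gain per crossing cycle
        energies.append(e)
    return energies
```

Counting particles above two reference energies and taking the log-ratio recovers the power-law index, illustrating why the spectrum is insensitive to the detailed scattering law but sensitive to the escape probability.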
Beam Dynamics Design and Simulation in Ion Linear Accelerators
Ostroumov, Peter N.; Asseev, Vladislav N.; Mustapha, Brahim
2006-08-01
Originally, the ray-tracing code TRACK was developed to fulfill the many special requirements of the Rare Isotope Accelerator facility known as RIA. Since no available beam-dynamics code met all the necessary requirements, modifications were introduced to allow end-to-end (from the ion source to the production target) simulations of the RIA machine. TRACK is a general beam-dynamics code and can be applied to the design, commissioning and operation of modern ion linear accelerators and beam transport systems.
Introducing a new paradigm for accelerators and large experimental apparatus control systems
NASA Astrophysics Data System (ADS)
Catani, L.; Zani, F.; Bisegni, C.; Di Pirro, G.; Foggetta, L.; Mazzitelli, G.; Stecchi, A.
2012-11-01
The integration of web technologies and web services has been, in recent years, one of the major trends in upgrading and developing distributed control systems for accelerators and large experimental apparatuses. Usually, web technologies have been introduced to complement the control systems with smart add-ons and user-friendly services or, for instance, to safely allow access to the control system from remote sites. Despite this still narrow spectrum of employment, some software technologies developed for high-performance web services, although originally intended and optimized for those particular applications, offer features that suggest a deeper integration in a control system and, eventually, their use to develop some of the control system's core components. In this paper, we present the conceptual design of a new control system for a particle accelerator and its associated machine data acquisition system, based on a synergistic combination of a non-relational key/value database and network-distributed object caching. The use of these technologies, to implement respectively continuous data archiving and data distribution between components, led to a new control system concept offering a number of interesting features, such as a high level of abstraction of services and components and their integration in a framework that acts as a comprehensive service provider, which both graphical user interface applications and front-end controllers join to access and, to some extent, extend its functionality.
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
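Operator splitting of the kind described decouples the pointwise membrane kinetics, which parallelize trivially across cells, from the diffusive coupling between cells. A minimal 1-D sketch, using generic FitzHugh-Nagumo-like kinetics and an explicit diffusion stencil rather than the authors' SANC/atrial models:

```python
import numpy as np

def step_split(v, w, dt, dx, D=1.0):
    """One first-order operator-splitting step for a 1-D excitable cable:
    (1) advance the local (FitzHugh-Nagumo-like) membrane kinetics in
        every cell independently -- the sub-step that maps naturally to
        one GPU thread per cell;
    (2) advance diffusion with an explicit 3-point finite-difference
        stencil and no-flux boundaries."""
    # reaction sub-step: pointwise ODEs, no coupling between cells
    dv = v * (1.0 - v) * (v - 0.1) - w
    dw = 0.01 * (0.5 * v - w)
    v = v + dt * dv
    w = w + dt * dw
    # diffusion sub-step: discrete Laplacian, no-flux boundaries
    lap = np.empty_like(v)
    lap[1:-1] = v[2:] - 2.0 * v[1:-1] + v[:-2]
    lap[0] = v[1] - v[0]
    lap[-1] = v[-2] - v[-1]
    v = v + dt * D * lap / dx**2
    return v, w
```

Because the reaction sub-step touches each cell independently, the split form exposes exactly the parallelism that the GPU strategies in the paper exploit; only the diffusion sub-step requires neighbor communication.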
Egorov, I.
2014-06-15
This paper describes the development of a computational model of a pulsed voltage generator for a repetitive electron accelerator. The model is based on the principal circuit of the generator, supplemented with the parasitic elements of the construction. Verification of the circuit model was achieved by comparing simulation with experimental results, where reasonable agreement was demonstrated over a wide range of generator load resistance.
Monte Carlo simulations of particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.
1994-01-01
The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy-ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Θ_B1 to the shock normal. Spectral results from test-particle Monte Carlo simulations of cosmic-ray acceleration at oblique, non-relativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.
New Developments in the Simulation of Advanced Accelerator Concepts
Bruhwiler, David L.; Cary, John R.; Cowan, Benjamin M.; Paul, Kevin; Mullowney, Paul J.; Messmer, Peter; Geddes, Cameron G. R.; Esarey, Eric; Cormier-Michel, Estelle; Leemans, Wim; Vay, Jean-Luc
2009-01-22
Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.
The Particle Accelerator Simulation Code PyORBIT
Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M; Shishlo, Andrei P
2015-01-01
The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. PyORBIT is a new implementation and extension of the algorithms of the original ORBIT code, which was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. PyORBIT has a two-level structure: the upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in C++. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.
Simulating An Acceleration Schedule For NDCX-II
Sharp, W M; Friedman, A; Grote, D P; Henestroza, E; Leitner, M A; Waldron, W L
2009-05-18
The Virtual National Laboratory for Heavy-Ion Fusion Science is developing a physics design for NDCX-II, an experiment to study warm dense matter heated by ions. Present plans call for using 34 induction cells to accelerate 45 nC of Li+ ions to more than 3 MeV, followed by neutralized drift-compression. To heat targets to the desired temperatures, the beam must be compressed to a millimeter-scale radius and a duration of about 1 ns. A novel NDCX-II acceleration schedule has been developed, using the interactive one-dimensional particle-in-cell code ASP to model the longitudinal physics and axisymmetric Warp simulations to validate the 1-D model and add transverse focusing. Three-dimensional Warp runs have been used recently to study the sensitivity to misalignments in the focusing solenoids.
Community Petascale Project for Accelerator Science and Simulation
Warren B. Mori
2013-02-01
The UCLA Plasma Simulation Group is a major partner of the "Community Petascale Project for Accelerator Science and Simulation." This is the final technical report. We include an overall summary, a list of publications, and individual progress reports for each year. During the past five years we have made tremendous progress in enhancing the capabilities of OSIRIS and QuickPIC, in developing new algorithms and data structures for PIC codes to run on GPUs and future many-core architectures, and in using these codes to model experiments and make new scientific discoveries. Here we summarize some highlights for which SciDAC was a major contributor.
Design of Accelerator Online Simulator Server Using Structured Data
Shen, Guobao; Chu, Chungming; Wu, Juhao; Kraimer, Martin; /Argonne
2012-07-06
Model based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.
Simulating synchrotron radiation in accelerators including diffuse and specular reflections
NASA Astrophysics Data System (ADS)
Dugan, G.; Sagan, D.
2017-02-01
An accurate calculation of the synchrotron radiation flux within the vacuum chamber of an accelerator is needed for a number of applications, including simulations of electron cloud effects and the design of radiation masking systems. To properly simulate the synchrotron radiation, it is important to include the scattering of the radiation at the vacuum chamber walls. To this end, a program called synrad3d has been developed which simulates the production and propagation of synchrotron radiation using a collection of photons. Photons generated by a charged particle beam are tracked from birth until they strike the vacuum chamber wall, where each photon is either absorbed or scattered. Both specular and diffuse scattering are simulated. If a photon is scattered, it is further tracked through multiple encounters with the wall until it is finally absorbed. This paper describes the synrad3d program, with a focus on the details of its scattering model, and presents some examples of the program's use.
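The per-bounce scattering decision described above can be sketched as follows. The fixed specular probability and the Lambertian (cosine-weighted) diffuse lobe are simplifying assumptions for illustration, not synrad3d's actual wall-reflectivity model:

```python
import math
import random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def reflect(d, n, p_specular, rng=None):
    """Choose a scattered photon direction at a wall with inward unit
    normal n: specular with probability p_specular, otherwise diffuse
    (Lambertian, cosine-weighted about n). d is the incident unit
    direction pointing into the wall."""
    rng = rng or random.Random()
    if rng.random() < p_specular:
        # mirror reflection: d' = d - 2 (d.n) n
        dot = sum(di * ni for di, ni in zip(d, n))
        return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))
    # cosine-weighted hemisphere sample about n
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    sx, sy, sz = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1)
    # build an orthonormal basis (t1, t2, n) for the tangent plane
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t1 = cross(a, n)
    norm = math.sqrt(sum(c * c for c in t1))
    t1 = tuple(c / norm for c in t1)
    t2 = cross(n, t1)
    return tuple(sx * t1[i] + sy * t2[i] + sz * n[i] for i in range(3))
```

A full tracker would apply this at each wall encounter, together with an absorption test, until the photon is absorbed.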
Accelerating particle-in-cell simulations using multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2015-11-01
Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
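The coarse/fine coupling that MLMC relies on can be illustrated with a two-level Euler-Maruyama estimator for a scalar SDE. Geometric Brownian motion stands in here for the Langevin collision dynamics, and all parameter values are illustrative:

```python
import math
import random

def mlmc_two_level(n_coarse, n_fine_pairs, T=1.0, mu=0.05, sigma=0.2,
                   x0=1.0, m_coarse=8, refine=4, seed=0):
    """Two-level multilevel Monte Carlo estimate of E[X_T] for the SDE
    dX = mu*X dt + sigma*X dW (Euler-Maruyama). The fine-level
    correction is computed from coupled coarse/fine paths driven by the
    SAME Brownian increments, so its variance -- and hence the number of
    expensive fine samples needed -- is small."""
    rng = random.Random(seed)

    def path(m, dws):
        x, dt = x0, T / m
        for dw in dws:
            x += mu * x * dt + sigma * x * dw
        return x

    # level 0: many cheap coarse paths
    dt_c = T / m_coarse
    p0 = sum(path(m_coarse, [rng.gauss(0.0, math.sqrt(dt_c))
                             for _ in range(m_coarse)])
             for _ in range(n_coarse)) / n_coarse
    # level 1: few coupled fine/coarse pairs estimating the correction
    m_fine = m_coarse * refine
    dt_f = T / m_fine
    corr = 0.0
    for _ in range(n_fine_pairs):
        fine_dws = [rng.gauss(0.0, math.sqrt(dt_f)) for _ in range(m_fine)]
        coarse_dws = [sum(fine_dws[i * refine:(i + 1) * refine])
                      for i in range(m_coarse)]   # same Brownian path
        corr += path(m_fine, fine_dws) - path(m_coarse, coarse_dws)
    return p0 + corr / n_fine_pairs
```

Extending this idea to PIC requires the levels to share self-consistently evolved fields rather than just a common noise path, which is the generalization the abstract describes.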
A universal postprocessing toolkit for accelerator simulation and data analysis.
Borland, M.
1998-12-16
The Self-Describing Data Sets (SDDS) toolkit comprises about 70 generally applicable programs sharing a common data protocol. At the Advanced Photon Source (APS), SDDS performs the vast majority of operational data collection and processing, most data display functions, and many control functions. In addition, a number of accelerator simulation codes use SDDS for all post-processing and data display. This has three principal advantages: first, simulation codes need not provide customized post-processing tools, thus simplifying development and maintenance. Second, users can enhance code capabilities without changing the code itself, by adding SDDS-based pre- and post-processing. Third, multiple codes can be used together more easily, by employing SDDS for data transfer and adaptation. Given its broad applicability, the SDDS file protocol is surprisingly simple, making it quite easy for simulations to generate SDDS-compliant data. This paper discusses the philosophy behind SDDS, contrasting it with some recent trends, and outlines the capabilities of the toolkit. The paper also gives examples of using SDDS for accelerator simulation.
Simulating Electron Clouds in Heavy-Ion Accelerators
Cohen, R.H.; Friedman, A.; Kireeff Covo, M.; Lund, S.M.; Molvik,A.W.; Bieniosek, F.M.; Seidl, P.A.; Vay, J-L.; Stoltz, P.; Veitzer, S.
2005-04-07
Contaminating clouds of electrons are a concern for most accelerators of positively charged particles, but heavy-ion accelerators for fusion and high-energy density physics have some unique aspects that make modeling such clouds especially challenging. In particular, self-consistent electron and ion simulation is required, including a particle-advance scheme that can follow electrons in regions where they are strongly, weakly, and un-magnetized. We describe our approach to such self-consistency, and in particular a scheme for interpolating between full-orbit (Boris) and drift-kinetic particle pushes that enables electron time steps long compared to the typical gyroperiod in the magnets. We present tests and applications: simulation of electron clouds produced by three different kinds of sources indicates the sensitivity of the cloud shape to the nature of the source; first-of-a-kind self-consistent simulation of electron-cloud experiments on the High-Current Experiment (HCX) at Lawrence Berkeley National Laboratory, in which the machine can be flooded with electrons released by impact of the ion beam on an end plate, demonstrates the ability to reproduce key features of the ion-beam phase space; and simulation of a two-stream instability of thin beams in a magnetic field demonstrates the ability of the large-timestep mover to accurately calculate the instability.
Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method
NASA Astrophysics Data System (ADS)
Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han
2015-12-01
Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
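The basic leapfrog update that GPU-accelerated FDTD codes parallelize can be sketched in one dimension. This vacuum, normalized-units toy omits all of the advanced techniques listed above (non-uniform grids, TFSF, dispersive materials), and the grid sizes and source parameters are illustrative:

```python
import math
import numpy as np

def fdtd_1d(n_cells=200, n_steps=100, src=100):
    """Minimal 1-D FDTD (Yee) field update in vacuum, normalized units
    (dt = dx / 2c, so pulses travel 0.5 cells per step). A Gaussian
    soft source launches a pulse that splits and propagates both ways."""
    ez = np.zeros(n_cells)   # electric field at integer grid points
    hy = np.zeros(n_cells)   # magnetic field at half-integer points
    for t in range(n_steps):
        ez[1:] += 0.5 * (hy[:-1] - hy[1:])            # update E from curl H
        ez[src] += math.exp(-((t - 30.0) / 10.0)**2)  # soft Gaussian source
        hy[:-1] += 0.5 * (ez[:-1] - ez[1:])           # update H from curl E
    return ez
```

Every cell's update depends only on its nearest neighbors, so each field array maps naturally onto GPU threads; in a multi-GPU cluster, only the cells on subdomain boundaries must be exchanged each step, which is where the communication optimization discussed in the abstract matters.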
A GPU Accelerated Simulation Program for Electron Cooling Process
NASA Astrophysics Data System (ADS)
Zhang, He; Huang, He; Li, Rui; Chen, Jie; Luo, Li-Shi
2015-04-01
Electron cooling is essential to achieve high luminosity in the medium-energy electron ion collider (MEIC) project at Jefferson Lab. A bunched electron beam with energy above 50 MeV is used to cool coasting and/or bunched ion beams. Although the conventional electron cooling technique has been widely used, such an implementation in MEIC is still challenging. We are developing a simulation program for the electron cooling process to fulfill the needs of the electron cooling system design for MEIC. The program simulates the evolution of the ion beam under the intrabeam scattering (IBS) effect and the electron cooling effect using a Monte Carlo method. To accelerate the calculation, the program is developed on a GPU platform. We will present some preliminary simulation results. Work supported by the Department of Energy, Laboratory Directed Research and Development Funding, under Contract No. DE-AC05-06OR23177.
Enhancing Protein Adsorption Simulations by Using Accelerated Molecular Dynamics
Mücksch, Christian; Urbassek, Herbert M.
2013-01-01
The atomistic modeling of protein adsorption on surfaces is hampered by the different time scales of the simulation (microseconds) and experiment (up to hours), and the accordingly different 'final' adsorption conformations. We provide evidence that the method of accelerated molecular dynamics is an efficient tool to obtain equilibrated adsorption states. As a model system we study the adsorption of the protein BMP-2 on graphite in an explicit salt water environment. We demonstrate that, due to the considerably improved sampling of conformational space, accelerated molecular dynamics makes it possible to observe the complete unfolding and spreading of the protein on the hydrophobic graphite surface. This result is in agreement with the general finding of protein denaturation upon contact with hydrophobic surfaces. PMID:23755156
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least faster than the sequential implementation and faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in
Accelerating Climate and Weather Simulations through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark
2011-01-01
Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.
PIC simulations on the termination shock: Microstructure and electron acceleration
NASA Astrophysics Data System (ADS)
Matsukiyo, S.; Scholer, M.
2013-05-01
The ability of the termination shock to act as a particle accelerator is largely unknown. Voyager data and recent kinetic numerical simulations revealed that the compression ratio of the termination shock is rather low due to the presence of pickup ions, i.e., the termination shock appears to be a weak shock. Nevertheless, the two Voyager spacecraft observed not only high-energy ions called termination shock particles, which are non-thermal but less energetic than the so-called anomalous cosmic rays, but also high-energy electrons. In this study we focus especially on the microstructure of the termination shock and the associated electron acceleration process by performing one-dimensional full particle-in-cell (PIC) simulations for a variety of parameters. For typical solar wind parameters at the termination shock, the shock potential has no sharp ramp on the spatial scale of the electron inertial length, which would be favorable for injection into anomalous cosmic ray acceleration. Solar wind ions are only weakly heated, which is consistent with Voyager spacecraft data. If the shock angle is close to 90 deg., the shock is almost time-stationary or weakly breathing when the relative pickup ion density is 30%, while it becomes non-stationary if the relative pickup ion density is 20%. When the shock angle becomes oblique, self-reformation occurs due to the interaction of solar wind ions and whistler precursors. Here, the shock angle is defined as the angle between the upstream magnetic field and the shock normal. For the case of relatively low-beta solar wind plasma (electron beta of 0.1 and solar wind ion temperature equal to the electron temperature), the modified two-stream instability (MTSI) is excited in the extended foot sustained by reflected pickup ions, and both solar wind electrons and ions are heated. If the solar wind plasma temperature is five times higher, on the other hand, the MTSI is weakened and the pre-heating of the solar wind plasma in the extended foot is
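The abstract's point that a low compression ratio makes the termination shock a weak accelerator can be made quantitative with the standard test-particle diffusive shock acceleration result (a textbook relation, not taken from this paper): for a shock with compression ratio $r$, the accelerated particle distribution is a power law

```latex
f(p) \;\propto\; p^{-q}, \qquad q = \frac{3r}{r - 1}
```

A strong shock ($r = 4$) gives $q = 4$; the lower compression ratio inferred at the termination shock steepens the spectrum, consistent with the "weak shock" characterization above.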
Simulation of PEP-II Accelerator Backgrounds Using TURTLE
Barlow, R.J.; Fieguth, T.; Kozanecki, W.; Majewski, S.A.; Roudeau, P.; Stocchi, A.; /Orsay, LAL
2006-02-15
We present studies of accelerator-induced backgrounds in the BaBar detector at the SLAC B-Factory, carried out using LPTURTLE, a modified version of the DECAY TURTLE simulation package. Lost-particle backgrounds in PEP-II are dominated by a combination of beam-gas bremsstrahlung, beam-gas Coulomb scattering, radiative-Bhabha events, and beam-beam blow-up. The radiation damage and detector occupancy caused by the associated electromagnetic shower debris can limit the usable luminosity. In order to understand and mitigate such backgrounds, we have performed a full program of beam-gas and luminosity-background simulations that include the effects of the detector solenoidal field, detailed modeling of limiting apertures in both collider rings, and optimization of the betatron collimation scheme in the presence of large transverse tails.
Wakefield Simulations for the Laser Acceleration Experiment at SLAC
Ng, Johnny
2012-04-18
Laser-driven acceleration in dielectric photonic band gap structures can provide gradients on the order of GeV/m. The small transverse dimension of the structure, on the order of the laser wavelength, presents interesting wakefield-related issues. Higher order modes can seriously degrade beam quality, and a detailed understanding is needed to mitigate such effects. On the other hand, wakefields also provide a direct way to probe the interaction of a relativistic bunch with the synchronous modes supported by the structure. Simulation studies have been carried out as part of the effort to understand the impact on beam dynamics, and to compare with data from beam experiments designed to characterize candidate structures. In this paper, we present simulation results of wakefields excited by a sub-wavelength bunch in optical photonic band gap structures.
GPU Accelerated Numerical Simulation of Viscous Flow Down a Slope
NASA Astrophysics Data System (ADS)
Gygax, Remo; Räss, Ludovic; Omlin, Samuel; Podladchikov, Yuri; Jaboyedoff, Michel
2014-05-01
Numerical simulations are an effective tool in natural risk analysis. They are useful for determining the propagation and runout distance of gravity-driven movements such as debris flows or landslides. To evaluate these processes, we combine analogue laboratory experiments with a GPU-accelerated numerical simulation of the flow of a viscous liquid down an inclined slope. The physical processes underlying large gravity-driven flows share certain aspects with the propagation of debris mass in a rockslide and the spreading of water waves. Several studies have shown that numerical implementations of the physics of viscous flow reproduce laboratory observations well, both quantitatively and qualitatively. For a process this well explored, we can concentrate on its numerical transcription and the application of the code in a GPU-accelerated environment to obtain a 3D simulation. The objective of providing a high-resolution numerical solution via NVIDIA CUDA GPU parallel processing is to increase both the speed of the simulation and the accuracy of the prediction. The main goal is to write short, easily adaptable code on the widely used MATLAB platform, which is then translated to C-CUDA to achieve higher resolution and processing speed while running on an NVIDIA graphics card cluster. The numerical model, based on a finite difference scheme, is compared to analogue laboratory experiments. In this way our numerical model parameters are adjusted to reproduce the effective movements observed by high-speed camera acquisitions during the laboratory experiments.
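A minimal sketch of the kind of explicit finite-difference update involved, here a 1D lubrication-theory model of a viscous film on an incline (illustrative only; the paper's solver is 3D, written in MATLAB/C-CUDA, and all values below are invented):

```python
import numpy as np

def thin_film_step(h, dt=1e-4, dx=0.05, c=1.0, slope=0.5):
    """One explicit step of a 1D thin-film (lubrication) model of viscous
    flow down an incline:  dh/dt = -d/dx[ c * h^3 * (slope - dh/dx) ].
    Conservative finite-difference form with fluxes at cell interfaces."""
    hi = 0.5 * (h[1:] + h[:-1])                # film height at interfaces
    grad = (h[1:] - h[:-1]) / dx               # dh/dx at interfaces
    q = c * hi**3 * (slope - grad)             # interface fluxes
    h_new = h.copy()
    h_new[1:-1] -= dt * (q[1:] - q[:-1]) / dx  # conservative update
    return h_new

# a mound of fluid released on the slope over a thin precursor film
x = np.linspace(0.0, 1.0, 21)
h = np.where(np.abs(x - 0.3) < 0.1, 0.2, 0.01)
for _ in range(200):
    h = thin_film_step(h)
```

Each time step touches every grid cell independently, which is why this class of solver maps well onto GPU parallelization.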
Caroselli, Jerome Silvio; Hiscock, Merrill; Scheibel, Randall S; Ingram, Fred
2006-01-01
Simulated gambling tasks have become popular as sensitive tools for identifying individuals with real-time impairment in decision making. Various clinical samples, especially patients with damage to the ventromedial prefrontal cortex, perform poorly on these tasks. The patients typically persist in choosing risky (disadvantageous) card decks instead of switching to safer (advantageous) decks. In terms of Damasio's (1994) somatic marker hypothesis, the poor performance stems from defective integration of emotional and rational aspects of decision making. Less information is available about performance in healthy populations, particularly young adults. After administering a computerized gambling task to 141 university students, we found that individuals in this population also tend to prefer disadvantageous decks to advantageous decks. The results indicate that performance is governed primarily by the frequency of positive outcomes on a trial-by-trial basis rather than by the accumulation of winnings in the longer term. These findings are discussed in light of the cognitive literature pertaining to the simulated gambling paradigm.
A new paradigm for reproducing and analyzing N-body simulations of planetary systems
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2017-01-01
The reproducibility of experiments is one of the main principles of the scientific method. However, numerical N-body experiments, especially those of planetary systems, are currently not reproducible. In the most optimistic scenario, they can only be replicated in an approximate or statistical sense. Even if authors share their full source code and initial conditions, differences in compilers, libraries, operating systems, or hardware often lead to qualitatively different results. We provide a new set of easy-to-use, open-source tools that address these issues, allowing for exact (bit-by-bit) reproducibility of N-body experiments. In addition to generating completely reproducible integrations, we show that our framework also offers novel ways to analyze these simulations. As an example, we present a high-accuracy integration of the Solar System spanning 10 Gyr, requiring several weeks to run on a modern CPU. In our framework we can easily access simulation data not only at the predefined intervals for which we save snapshots, but at any time during the integration. We achieve this by integrating an on-demand reconstructed simulation forward in time from the nearest snapshot. This allows us to extract arbitrary quantities at any point in the saved simulation exactly (bit-by-bit), and within seconds rather than weeks. We believe that the tools we present in this paper offer a new paradigm for how N-body simulations are run, analyzed, and shared across the community.
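The snapshot-plus-reintegration idea is simple to sketch. Below is a minimal, hypothetical stand-in (a fixed-step oscillator integrator instead of an N-body code): because the integrator is deterministic, re-running it from the nearest earlier snapshot reproduces the original trajectory bit for bit.

```python
def integrate(state, t0, t1, dt=0.001):
    """Toy fixed-step symplectic Euler integrator (harmonic oscillator)
    standing in for the N-body integration; bitwise determinism comes
    from executing the identical sequence of float operations."""
    x, v = state
    for _ in range(round((t1 - t0) / dt)):
        v -= dt * x   # kick
        x += dt * v   # drift
    return (x, v)

# the "long" run: save snapshots every 1.0 time units
snapshots = {}
state = (1.0, 0.0)
for k in range(11):
    snapshots[k * 1.0] = state
    state = integrate(state, k * 1.0, (k + 1) * 1.0)

def state_at(t):
    """Reconstruct the state at arbitrary t by re-integrating forward
    from the nearest earlier snapshot."""
    t0 = max(s for s in snapshots if s <= t)
    return integrate(snapshots[t0], t0, t)
```

Querying `state_at` between snapshots costs at most one snapshot interval of integration, rather than re-running the simulation from the start, which is the "seconds rather than weeks" point of the abstract.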
Final Progress Report - Heavy Ion Accelerator Theory and Simulation
Haber, Irving
2009-10-31
The use of a beam of heavy ions to heat a target for the study of warm dense matter physics and high energy density physics, and ultimately to ignite an inertial fusion pellet, requires beam intensities somewhat greater than have traditionally been obtained using conventional accelerator technology. The research program described here has substantially contributed to understanding the basic nonlinear intense-beam physics that is central to attaining the requisite intensities. Since it is very difficult to reverse intensity dilution, avoiding excessive dilution over the entire beam lifetime is necessary for achieving the required beam intensities on target. The central emphasis in this research has therefore been on understanding the nonlinear mechanisms responsible for intensity dilution, which generally occur when intense space-charge-dominated beams are not in detailed equilibrium with the external forces used to confine them. This is an important area of study because such lack of detailed equilibrium can be an unavoidable consequence of the beam manipulations, such as acceleration, bunching, and focusing, necessary to attain sufficient intensity on target. The primary tool employed in this effort has been simulation, particularly the WARP code, in concert with experiment, to identify the nonlinear dynamical characteristics that are important in practical high-intensity accelerators. This research has gradually made a transition from the study of idealized systems and comparisons with theory, to the study of the fundamental scaling of intensity dilution in intense beams, and more recently to explicit identification of the mechanisms relevant to actual experiments. This work consists of two categories: work in direct support of beam physics directly applicable to NDCX, and a larger effort to further the general understanding of space-charge-dominated beam physics.
Requirements for Simulating Space Radiation With Particle Accelerators
NASA Technical Reports Server (NTRS)
Schimmerling, W.; Wilson, J. W.; Cucinotta, F.; Kim, M-H Y.
2004-01-01
Interplanetary space radiation consists of fully ionized nuclei of atomic elements with high energy for which only the few lowest energy ions can be stopped in shielding materials. The health risk from exposure to these ions and their secondary radiations generated in the materials of spacecraft and planetary surface enclosures is a major limiting factor in the management of space radiation risk. Accurate risk prediction depends on a knowledge of basic radiobiological mechanisms and how they are modified in the living tissues of a whole organism. To a large extent, this knowledge is not currently available. It is best developed at ground-based laboratories, using particle accelerator beams to simulate the components of space radiation. Different particles, in different energy regions, are required to study different biological effects, including beams of argon and iron nuclei in the energy range 600 to several thousand MeV/nucleon and carbon beams in the energy range of approximately 100 MeV/nucleon to approximately 1000 MeV/nucleon. Three facilities, one each in the United States, in Germany and in Japan, currently have the partial capability to satisfy these constraints. A facility has been proposed using the Brookhaven National Laboratory Booster Synchrotron in the United States; in conjunction with other on-site accelerators, it will be able to provide the full range of heavy ion beams and energies required. International cooperation in the use of these facilities is essential to the development of a safe international space program.
Dark Current Simulation for Linear Collider Accelerator Structures
Ng, C.K.; Li, Z.; Zhan, X.; Srinivas, V.; Wang, J.; Ko, K.; /SLAC
2011-08-25
The dynamics of field-emitted electrons in the traveling-wave fields of a constant-gradient (tapered) disk-loaded waveguide is followed numerically. Previous simulations have been limited to constant-impedance (uniform) structures for the sake of simplicity, since only the fields in a unit cell are needed. Using a finite element field solver on a parallel computer, the fields in the tapered structure can now be readily generated. We obtain the characteristics of the dark current emitted from both structure types and compare the two results with and without the effect of secondary electrons. The NLC and JLC detuned structures are considered to determine whether dark current may pose a problem for high-gradient acceleration in the next generation of linear colliders.
Accelerated prompt gamma estimation for clinical proton therapy simulations
NASA Astrophysics Data System (ADS)
Huisman, Brent F. B.; Létang, J. M.; Testa, É.; Sarrut, D.
2016-11-01
There is interest in the particle therapy community in using prompt gammas (PGs), a natural byproduct of particle treatment, for range verification and eventually dose control. However, PG production is a rare process and therefore estimation of PGs exiting a patient during a proton treatment plan executed by a Monte Carlo (MC) simulation converges slowly. Recently, different approaches to accelerating the estimation of PG yield have been presented. Sterpin et al (2015 Phys. Med. Biol. 60 4915-46) described a fast analytic method, which is still sensitive to heterogeneities. El Kanawati et al (2015 Phys. Med. Biol. 60 8067-86) described a variance reduction method (pgTLE) that accelerates the PG estimation by precomputing PG production probabilities as a function of energy and target materials, but has as a drawback that the proposed method is limited to analytical phantoms. We present a two-stage variance reduction method, named voxelized pgTLE (vpgTLE), that extends pgTLE to voxelized volumes. As a preliminary step, PG production probabilities are precomputed once and stored in a database. In stage 1, we simulate the interactions between the treatment plan and the patient CT with low statistic MC to obtain the spatial and spectral distribution of the PGs. As primary particles are propagated throughout the patient CT, the PG yields are computed in each voxel from the initial database, as a function of the current energy of the primary, the material in the voxel and the step length. The result is a voxelized image of PG yield, normalized to a single primary. The second stage uses this intermediate PG image as a source to generate and propagate the number of PGs throughout the rest of the scene geometry, e.g. into a detection device, corresponding to the number of primaries desired. We achieved a gain of around 10^3 for both a geometrical heterogeneous phantom and a complete patient CT treatment plan with respect to analog MC, at a convergence level of 2% relative
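The stage-1 scoring step described above can be sketched in a few lines. This is a hedged toy (the material table, energies, and track below are all invented, and the real method works on full CT volumes): each primary's track deposits an expected PG yield, production probability times step length, into a voxelized image.

```python
import numpy as np

# Precompute step: PG production probability per unit path length,
# tabulated vs. proton energy for each material (shapes are made up).
energies = np.linspace(0.0, 200.0, 201)            # MeV
pg_table = {
    "water": 1.0e-4 * np.sqrt(energies),
    "bone":  1.5e-4 * np.sqrt(energies),
}

def score_primary(track, pg_image, materials):
    """Stage 1: as a primary steps through voxels, accumulate the expected
    PG yield (probability * step length) into the voxelized PG image."""
    for (voxel, energy, step) in track:
        prob = np.interp(energy, energies, pg_table[materials[voxel]])
        pg_image[voxel] += prob * step

materials = {(0, 0): "water", (1, 0): "water", (2, 0): "bone"}
pg_image = {v: 0.0 for v in materials}
# one primary losing energy across three voxels (values illustrative)
track = [((0, 0), 150.0, 0.5), ((1, 0), 120.0, 0.5), ((2, 0), 90.0, 0.5)]
score_primary(track, pg_image, materials)
```

Because every step contributes its expected yield rather than waiting for rare discrete PG events, the estimate converges far faster than analog MC, which is the source of the reported variance-reduction gain.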
A Multi-Paradigm Modeling Framework to Simulate Dynamic Reciprocity in a Bioreactor
Kaul, Himanshu; Cui, Zhanfeng; Ventikos, Yiannis
2013-01-01
Despite numerous technology advances, bioreactors are still mostly utilized as functional black boxes where trial and error eventually leads to the desirable cellular outcome. Investigators have applied various computational approaches to understand the impact the internal dynamics of such devices has on overall cell growth, but such models cannot provide a comprehensive perspective regarding the system dynamics, due to limitations inherent to the underlying approaches. In this study, a novel multi-paradigm modeling platform capable of simulating the dynamic bidirectional relationship between cells and their microenvironment is presented. Designing the modeling platform entailed fully coupling an agent-based modeling platform with a transport-phenomena computational modeling framework. To demonstrate this capability, the platform was used to study the impact of bioreactor parameters on the overall cell population behavior and vice versa. In order to achieve this, virtual bioreactors were constructed and seeded. The virtual cells, guided by a set of rules involving the simulated mass transport inside the bioreactor, as well as cell-related probabilistic parameters, were capable of displaying an array of behaviors such as proliferation, migration, chemotaxis and apoptosis. In this way the platform was shown to capture not only the impact of bioreactor transport processes on cellular behavior but also the influence that cellular activity wields on that very same local mass transport, thereby influencing overall cell growth. The platform was validated by simulating cellular chemotaxis in a virtual direct visualization chamber and comparing the simulation with its experimental analogue. The results presented in this paper are in agreement with published models of similar flavor. The modeling platform can be used as a concept selection tool to optimize bioreactor design specifications. PMID:23555740
NASA Astrophysics Data System (ADS)
Riecken, Mark; Lessmann, Kurt; Schillero, David
2016-05-01
The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact, and which distributed simulation domains will be most affected? DDS shares many of the goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential, we predict a large base of technology to be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms such as the Data Distribution Service (DDS) share properties in common with distributed modeling and simulation (M&S) and its protocols such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed
Saturn: A large area x-ray simulation accelerator
Bloomquist, D.D.; Stinnett, R.W.; McDaniel, D.H.; Lee, J.R.; Sharpe, A.W.; Halbleib, J.A.; Schlitt, L.G.; Spence, P.W.; Corcoran, P.
1987-01-01
Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area x-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology made by the ICF program in recent years and much of the existing PBFA-I support system. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an x-ray exposure capability of 5 × 10^12 rad/s (Si) and 5 cal/g (Au) over 500 cm^2.
Accelerated Molecular Dynamics Simulations of Reactive Hydrocarbon Systems
Stuart, Steven J.
2014-02-25
The research activities in this project consisted of four different sub-projects. Three different accelerated dynamics techniques (parallel replica dynamics, hyperdynamics, and temperature-accelerated dynamics) were applied to the modeling of pyrolysis of hydrocarbons. In addition, parallel replica dynamics was applied to modeling of polymerization.
NASA Astrophysics Data System (ADS)
Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya
2014-05-01
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques
Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; Ohmura, Satoshi; Shimamura, Kohei
2014-05-14
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of
Simulation of SEP Acceleration and Transport at CME-driven Shocks
Kota, J.; Jokipii, J.R.; Manchester, W.B.; Zeeuw, D.L. de; Gombosi, T.I.
2005-08-01
Our code for solar energetic particle (SEP) acceleration and transport, developed in Arizona, is combined with the realistic CME simulations of Michigan, using the solar wind and magnetic field data of the Michigan CME simulation as input to the SEP code. We suggest that, in addition to the acceleration at the shock, significant acceleration may also occur in the sheath behind the shock, where magnetic field lines are compressed as they are bent around the expanding cloud. We consider field-aligned motion and cast the proper Fokker-Planck equation into a non-inertial comoving frame that follows field lines as they evolve. Illustrative simulation results are presented.
Final Report for "Community Petascale Project for Accelerator Science and Simulations".
Cary, J. R.; Bruhwiler, D. L.; Stoltz, P. H.; Cormier-Michel, E.; Cowan, B.; Schwartz, B. T.; Bell, G.; Paul, K.; Veitzer, S.
2013-04-19
This final report describes the work accomplished over the past 5 years under the Community Petascale Project for Accelerator Science and Simulations (ComPASS) at Tech-X Corporation. Tech-X has been involved in the full range of ComPASS activities: simulation of laser plasma accelerator concepts, mainly in collaboration with the LOASIS program at LBNL; simulation of coherent electron cooling in collaboration with BNL; modeling of electron clouds in high-intensity accelerators in collaboration with researchers at Fermilab; and accurate modeling of superconducting RF cavities in collaboration with Fermilab, JLab, and the Cockcroft Institute in the UK.
Crabtree, George; Glotzer, Sharon; McCurdy, Bill; Roberto, Jim
2010-07-26
enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness. The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. 
Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together
Numerical simulations of the superdetonative ram accelerator combusting flow field
NASA Technical Reports Server (NTRS)
Soetrisno, Moeljo; Imlay, Scott T.; Roberts, Donald W.
1993-01-01
The effects of projectile canting and fins on the ram accelerator combusting flowfield, and the possible cause of ram accelerator unstart, are investigated by performing axisymmetric, two-dimensional, and three-dimensional calculations. Calculations are performed using the INCA code for solving the Navier-Stokes equations together with the quasi-global combustion model of Westbrook and Dryer (1981, 1984), which includes N2 and nine reacting species (CH4, CO, CO2, H2, H, O2, O, OH, and H2O) undergoing a 12-step reaction mechanism. It is found that, without canting, interactions between the fins, boundary layers, and combustion fronts are insufficient to unstart the projectile at superdetonative velocities. With canting, the projectile unstarts at flow conditions where it appears to accelerate without canting; unstart occurs at some critical canting angle. It is also found that three-dimensionality plays an important role in the overall combustion process.
Reactor for simulation and acceleration of solar ultraviolet damage
NASA Technical Reports Server (NTRS)
Laue, E.; Gupta, A.
1979-01-01
An environmental test chamber providing acceleration of UV radiation and precise temperature control (±1 °C) was designed, constructed, and tested. This chamber allows acceleration of solar ultraviolet up to 30 suns while maintaining the temperature of the absorbing surface at 30-60 °C. The test chamber utilizes a filtered medium-pressure mercury arc as the source of radiation, and a combination of a selenium radiometer and a silicon radiometer to monitor solar ultraviolet (295-340 nm) and total radiant power output, respectively. Details of design, construction, and operational procedures are presented along with typical test data.
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
Acceleration of a QM/MM-QMC simulation using GPU.
Uejima, Yutaka; Terashima, Tomoharu; Maezono, Ryo
2011-07-30
We accelerated an ab initio molecular QMC calculation by using GPGPU. Only the bottleneck part of the calculation is replaced by a CUDA subroutine and performed on the GPU. The performance of a single-core CPU plus GPU is compared with that of a single-core CPU in double precision, giving 23.6 (11.0) times faster calculations for single (double) precision treatments on the GPU. The energy deviation caused by the single-precision treatment was found to be within the accuracy required in the calculation, ~10^-5 hartree. The accelerated computational nodes mounting GPUs are combined to form a hybrid MPI cluster, on which we confirmed that the performance scales linearly with the number of nodes.
Constraint methods that accelerate free-energy simulations of biomolecules
MacCallum, Justin L.; Dill, Ken A.
2015-01-01
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628
Constraint methods that accelerate free-energy simulations of biomolecules
Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
Constraint methods that accelerate free-energy simulations of biomolecules
NASA Astrophysics Data System (ADS)
Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.
2015-12-01
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
O'Sullivan, Ciara C; Connolly, Roisin M
2014-03-01
The addition of trastuzumab, a monoclonal antibody to human epidermal growth factor receptor 2 (HER2), to standard chemotherapy in patients with HER2-positive breast cancer has resulted in major improvements in breast cancer outcomes, including improved survival, in both the adjuvant and metastatic settings. However, some patients experience disease relapse despite adjuvant trastuzumab-containing therapy, and resistance to trastuzumab develops in the majority of patients in the metastatic setting. An understanding of the molecular mechanisms underlying trastuzumab resistance has aided the development of novel HER2-targeted therapies. In June 2012, the HER2 dimerization inhibitor pertuzumab was approved by the US Food and Drug Administration (FDA) for use with chemotherapy and trastuzumab in the first-line treatment of metastatic HER2-positive breast cancer. In September 2013, accelerated approval was granted for use of pertuzumab in the neoadjuvant setting, representing a landmark decision by the FDA. This article discusses the development of pertuzumab to date, with a particular focus on the accelerated approval decision. We highlight the need to identify reliable biomarkers of sensitivity and resistance to HER2-targeted therapy, which would make possible the individualization of treatment for patients with HER2-positive breast cancer.
Accelerated Laboratory Research Experience in Psychology through Simulation.
ERIC Educational Resources Information Center
Chatfield, Douglas C.; Cruse, Bradley H.
1986-01-01
Describes implementation of computer simulation to aid in training psychology students in research methodology. Four skills required in research are reviewed; the simulation's context and the software used are described; and student activities, including submission of articles to online class journals and students' responses to the method, are…
Goetsch, Steven J.
2008-05-01
Intracranial stereotactic radiosurgery has been practiced since 1951. The technique has expanded from a single dedicated unit in Stockholm in 1968 to hundreds of centers performing an estimated 100,000 Gamma Knife and linear accelerator cases in 2005. The radiation dosimetry of small photon fields used in this technique has been well explored in the past 15 years. Quality assurance recommendations have been promulgated in refereed reports and by several national and international professional societies since 1991. The field has survived several reported treatment errors and incidents, generally reacting by strengthening standards and precautions. An increasing number of computer-controlled and robotic-dedicated treatment units are expanding the field and putting patients at risk of unforeseen errors. Revisions and updates to previously published quality assurance documents, and especially to radiation dosimetry protocols, are now needed to ensure continued successful procedures that minimize the risk of serious errors.
NASA Astrophysics Data System (ADS)
Yin, L.; Stark, D. J.; Albright, B. J.
2016-10-01
Laser-ion acceleration via relativistic induced transparency provides an effective means to accelerate ions to tens of MeV/nucleon over distances of tens of micrometers. These ion sources may enable a host of applications, from fast ignition and x-ray sources to medical treatments. Understanding whether two-dimensional (2D) PIC simulations can capture the relevant 3D physics is important to the development of a predictive capability for short-pulse laser-ion acceleration and for economical design studies for applications of these accelerators. In this work, PIC simulations are performed in 3D and in 2D with the laser polarization in the simulation plane (2D-P) and out of plane (2D-S). Our studies indicate modeling sensitivity to dimensionality and laser polarization. Differences arise in energy partition, electron heating, ion peak energy, and ion spectral shape. 2D-P simulations are found to over-predict electron heating and ion peak energy. The origin of these differences and the extent to which 2D simulations may capture the key acceleration dynamics will be discussed. Work performed under the auspices of the U.S. DOE by LANS, LLC, Los Alamos National Laboratory, under Contract No. DE-AC52-06NA25396. Funding provided by the Los Alamos National Laboratory Directed Research and Development Program.
Numerical Simulations of the NRL Collective Particle Accelerator.
1983-11-01
... observable experimentally. It is worth noting as well that an IREB propagating through a rippled magnetic field can produce significant microwave ... insulation, diode operation, microwave tube design, etc. MAGIC is an intermediate-size code (about 20,000 statements) and is highly optimized both for user ... [Figure: accelerator phase-space plot of P1 vs. X1 at time 4.20E-09 s, species 1.]
Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations
NASA Astrophysics Data System (ADS)
Di Staso, G.; Clercx, H. J. H.; Succi, S.; Toschi, F.
2016-11-01
Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.
2006-01-01
Nonthermal radiation observed from astrophysical systems containing (relativistic) jets and shocks, e.g., supernova remnants, active galactic nuclei (AGNs), gamma-ray bursts (GRBs), and Galactic microquasar systems, usually has power-law emission spectra. Fermi acceleration is the mechanism usually assumed for the acceleration of particles in astrophysical environments. Recent PIC simulations using injected relativistic electron-ion (electron-positron) jets show that acceleration occurs within the downstream jet, rather than by the scattering of particles back and forth across the shock as in Fermi acceleration. Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, the two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. The simulation results show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields. These magnetic fields contribute to the electrons' transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties from synchrotron radiation, which is calculated for a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants. We review recent PIC simulations which show particle acceleration in jets.
Yue, Jingwei; Zhou, Zongtan; Jiang, Jun; Liu, Yadong; Hu, Dewen
2012-08-30
Most brain-computer interfaces (BCIs) are non-time-critical systems, and designing a real-time BCI paradigm for controlling unstable devices remains a challenging problem. This paper presents a real-time feedback BCI paradigm for controlling an inverted pendulum on a cart (IPC). In this paradigm, sensorimotor rhythms (SMRs) were recorded using 15 active electrodes placed on the surface of the subject's scalp. A common spatial pattern (CSP) filter was then used to extract spatial patterns, and linear discriminant analysis (LDA) translated the patterns into control commands that could stabilize the simulated inverted pendulum. Offline training sessions taught the subjects to execute the corresponding mental tasks, such as left/right-hand motor imagery. Five subjects could successfully balance the online inverted pendulum for more than 35 s. The results demonstrate that BCIs are able to control nonlinear unstable devices; the demonstration and extension of real-time continuous control may be useful for real-life application and generalization of BCIs.
Simulation of Laser Wake Field Acceleration using a 2.5D PIC Code
An, W. M.; Hua, J. F.; Huang, W. H.; Tang, Ch. X.; Lin, Y. Z.
2006-11-27
A 2.5D PIC simulation code is developed to study LWFA (laser wakefield acceleration). Electron self-injection and the generation of a mono-energetic electron beam in LWFA are briefly discussed through the simulation. This year's experiment at the SILEX-I laser facility is also introduced.
Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL
Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor
2011-09-06
We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
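The operation being accelerated above reduces to repeated matrix products: a daylight-coefficient matrix applied to one sky vector per timestep. A minimal sketch with random data; only the 146-patch sky size comes from the abstract, and the sensor count and hourly timestep count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sky, n_hours = 100, 146, 8760   # 146-patch sky, hourly annual run
dc = rng.random((n_sensors, n_sky))          # daylight-coefficient matrix
sky = rng.random((n_sky, n_hours))           # one sky vector per timestep

# Batching all timesteps into a single matrix-matrix product is exactly
# the kind of dense floating-point workload that benefits from GPU offload.
annual = dc @ sky                            # illuminance per sensor per hour
```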
ELECTROMAGNETIC AND THERMAL SIMULATIONS FOR THE SWITCH REGION OF A COMPACT PROTON ACCELERATOR
Wang, L; Caporaso, G J; Sullivan, J S
2007-06-15
A compact proton accelerator for medical applications is being developed at Lawrence Livermore National Laboratory. The accelerator architecture is based on the dielectric wall accelerator (DWA) concept. One critical area to consider is the switch region. Electric-field simulations and thermal calculations of the switch area were performed to help determine the operating limits of the SiC switches. Different geometries were considered for the field simulation, including the shape of the thin indium solder meniscus between the electrodes and the SiC. Electric-field simulations were also used to demonstrate how the field stress could be reduced. Both transient and steady-state thermal simulations were analyzed to find the average power capability of the switches.
Simulation Studies of the Dielectric Grating as an Accelerating and Focusing Structure
Soong, Ken; Peralta, E.A.; Byer, R.L.; Colby, E.; /SLAC
2011-08-12
A grating-based design is a promising candidate for a laser-driven dielectric accelerator. Through simulations, we show the merits of a readily fabricated grating structure as an accelerating component. Additionally, we show that with a small design perturbation, the accelerating component can be converted into a focusing structure. The understanding of these two components is critical to the successful development of any complete accelerator. The concept of accelerating electrons with the tremendous electric fields found in lasers has been proposed for decades, but until recently the realization of such an accelerator was not technologically feasible. Recent advances in the semiconductor industry, as well as in laser technology, have now made laser-driven dielectric accelerators imminent. The grating-based accelerator is one proposed design for a dielectric laser-driven accelerator. This design, introduced by Plettner, consists of a pair of opposing transparent binary gratings, illustrated in Fig. 1. The teeth of the gratings serve as a phase mask, ensuring phase synchronicity between the electromagnetic field and the moving particles. The current grating accelerator design has the drive laser incident perpendicular to the substrate, which poses a laser-structure alignment complication. The next iteration of grating-structure fabrication seeks to monolithically create an array of grating structures by etching the grating's vacuum channel into a fused silica wafer. With this method it is possible to have the drive laser confined to the plane of the wafer, thus ensuring alignment of the laser and structure, the two grating halves, and subsequent accelerator components. There has been previous work using two-dimensional finite-difference time-domain (2D-FDTD) calculations to evaluate the performance of the grating accelerator structure. However, this work approximates the grating as an infinite structure and does not accurately model a
FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation
NASA Astrophysics Data System (ADS)
Veltri, M.
2016-09-01
This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution could be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue-critical areas can drive a simplification of the problem size, leading to appreciable improvements in solution time and model handling while allowing the critical areas to be processed in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage-prediction quantification and visualization techniques allow a quick and efficient comparison between methods, outlining potential application benefits and boundaries.
Numerical simulation of solar cosmic ray acceleration in reconnecting current sheets
NASA Astrophysics Data System (ADS)
Balabin, Yury; Podgorny, Igor; Podgorny, Alexander; Vashenyuk, Eduard
Neutron monitor measurements reveal two components of relativistic protons accompanying a flare. The prompt component of relativistic protons is created simultaneously with the flare's hard X-ray radiation; it carries information about the mechanism of particle acceleration in a flare up to 10 GeV. The prompt component shows an exponential spectrum with W0 of order 0.5 GeV. The possibility of particle acceleration in a current sheet has been considered in the frame of the electrodynamic solar flare model. Particles can gain energy during acceleration in the Lorentz electric field along a singular line. A similar acceleration mechanism has been observed in the powerful pinch discharge. Previous simulation work has shown that an exponential spectrum appears if the electric field is applied along a magnetically symmetric X-type singular line. Such simulations can be considered only a first step toward reality, because the real field distribution is much more complicated. Numerical simulations have now been carried out for realistic magnetic and electric configurations calculated in MHD numerical experiments for the famous Bastille Day flare. The results show that the spectrum of protons accelerated during a flare is indeed exponential. From comparison of the simulation results with observed solar proton spectra, a reconnection rate of order 10^7 cm/s is estimated for W0 ≈ 0.5 GeV.
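The exponential spectrum referred to above can be written explicitly (a sketch using only the characteristic energy quoted in the abstract):

```latex
J(W) \propto \exp\!\left(-\frac{W}{W_0}\right), \qquad W_0 \approx 0.5\ \text{GeV}
```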
Beam dynamics simulations of post low energy beam transport section in RAON heavy ion accelerator
Jin, Hyunchang; Jang, Ji-Ho; Jang, Hyojae; Hong, In-Seok
2016-02-15
RAON (Rare isotope Accelerator Of Newness), the heavy-ion accelerator of the rare isotope science project in Daejeon, Korea, has been designed to accelerate multiple-charge-state beams for various science programs. In the RAON accelerator, rare isotope beams generated by an isotope separation on-line system, covering a wide range of nuclei and charges, will be transported through the post Low Energy Beam Transport (LEBT) section to the Radio Frequency Quadrupole (RFQ). In order to transport many kinds of rare isotope beams stably to the RFQ, the post LEBT should be designed to satisfy the beam requirements of the RFQ at its end while keeping the Twiss parameters small. We present the recent lattice design of the post LEBT in the RAON accelerator and the results of beam dynamics simulations based on it. In addition, error analysis and correction in the post LEBT are also described.
Computer simulation of the coupling slots effects for on-axis coupled accelerating structures.
NASA Astrophysics Data System (ADS)
Salakhoutdinov, A. F.; Shvedunov, V. I.
1997-05-01
The presence of coupling elements in accelerating structures violates the axial symmetry of the accelerating field and may cause displacement, defocusing, and non-linear distortion of phase space; as a result, transverse emittance growth occurs. On the other hand, these effects may be used to design RF-focusing accelerating structures for electron accelerators of various types. A numerical simulation of the electrodynamic properties of an on-axis coupled accelerating structure, taking the coupling slots into account, has been performed. The characteristics of the fields excited within the coupling cell have been investigated, and numerical estimates of the various multipole components of the transverse forces acting on a particle inside the coupling cell have been obtained.
The operant reserve: a computer simulation in (accelerated) real time.
Catania, A Charles
2005-05-31
In Skinner's Reflex Reserve theory, reinforced responses added to a reserve depleted by responding. It could not handle the finding that partial reinforcement generated more responding than continuous reinforcement, but it would have worked if its growth had depended not just on the last response but also on earlier responses preceding a reinforcer, each weighted by delay. In that case, partial reinforcement generates steady states in which reserve decrements produced by responding balance increments produced when reinforcers follow responding. A computer simulation arranged schedules for responses produced with probabilities proportional to reserve size. Each response subtracted a fixed amount from the reserve and added an amount weighted by the reciprocal of the time to the next reinforcer. Simulated cumulative records and quantitative data for extinction, random-ratio, random-interval, and other schedules were consistent with those of real performances, including some effects of history. The model also simulated rapid performance transitions with changed contingencies that did not depend on molar variables or on differential reinforcement of inter-response times. The simulation can be extended to inhomogeneous contingencies by way of continua of reserves arrayed along response and time dimensions, and to concurrent performances and stimulus control by way of different reserves created for different response classes.
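The reserve dynamics described in the abstract above can be sketched in a minimal simulation. All parameter values below (decrement size, increment weight, response and reinforcement probabilities) are illustrative assumptions, not Catania's published values:

```python
import random

def simulate_reserve(steps=10000, p_reinforce=0.1, decrement=1.0,
                     reserve=100.0, max_reserve=1000.0, seed=1):
    """Minimal sketch of an operant-reserve model on a random-ratio
    schedule: responses occur with probability proportional to reserve
    size; each response depletes the reserve, and each reinforcer adds
    increments for the responses since the last reinforcer, weighted by
    the reciprocal of their delay to the reinforcer."""
    rng = random.Random(seed)
    recent = []                # time steps of responses since last reinforcer
    responses = reinforcers = 0
    for t in range(steps):
        if rng.random() < reserve / max_reserve:   # emit a response
            responses += 1
            reserve = max(0.0, reserve - decrement)
            recent.append(t)
            if rng.random() < p_reinforce:         # random-ratio reinforcer
                reinforcers += 1
                for tr in recent:                  # delay-weighted increments
                    reserve += 10.0 / (t - tr + 1)
                reserve = min(reserve, max_reserve)
                recent.clear()
    return responses, reinforcers, reserve
```

On a random-ratio schedule the delay-weighted increments roughly balance the per-response decrements, producing the steady states the abstract describes.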
New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.
Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu
2014-12-10
The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables as it provides a rigorous probabilistic basis for a measured τ-leap thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion while the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed, with respect to one discussed by Cao et al. (J. Chem. Phys.2006, 124, 044109), a second pertaining to binomial leap (Tian and Burrage J. Chem. Phys.2004, 121, 10356; Chatterjee et al. J. Chem. Phys.2005, 122, 024112; Peng et al. J. Chem. Phys.2007, 126, 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie J. Chem. Phys.2001, 115, 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations is derived from diverse areas of application in physical and biological sciences, process systems, and economics, etc. Computational improvements such as those reported herein are therefore of considerable significance.
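The tau-leap idea discussed above can be illustrated with a minimal sketch for a single irreversible reaction. The leap condition used here is the simple bounded-relative-change heuristic, not the Chebyshev-based criterion the paper proposes, and the rate constant and populations are illustrative:

```python
import math
import random

def tau_leap_isomerization(n_a=1000, k=0.1, t_end=10.0, eps=0.03, seed=2):
    """Minimal Poisson tau-leap for the irreversible isomerization
    A -> B with rate constant k. tau is chosen so the expected relative
    change in the A population (hence in the propensity) stays below
    eps during each leap."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplicative Poisson sampler (adequate for modest lam)
        limit, p, n = math.exp(-lam), 1.0, 0
        while True:
            p *= rng.random()
            if p <= limit:
                return n
            n += 1

    t = 0.0
    while t < t_end and n_a > 0:
        a = k * n_a                           # propensity of A -> B
        tau = min(eps * n_a / a, t_end - t)   # leap condition: a*tau <= eps*n_a
        n_a = max(0, n_a - poisson(a * tau))  # reaction firings during the leap
        t += tau
    return n_a
```

For these parameters the mean remaining A population at t_end is about n_a * exp(-k * t_end) ≈ 368, which a single run should approach to within the Poisson noise.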
Numerical Simulation of Laser-Driven In-Tube Accelerator Operation
Ohnishi, N.; Ogino, Y.; Sawada, K.; Ohtani, T.; Mori, K.; Sasoh, A.
2006-05-02
To achieve a higher thrust performance in laser-driven in-tube accelerator operation, numerical analyses have been carried out. The computational code covers the process from the generation of the blast wave to its interactions with the projectile and the acceleration tube wall. The thrust history and the momentum coupling coefficient evaluated from the numerical simulation depend on the fill pressure and the projectile shape. A confinement effect is clearly found for a projectile fitted with a shroud.
NASA Astrophysics Data System (ADS)
Ribstein, Bruno; Bölöni, Gergely; Muraschko, Jewgenija; Sgoff, Christine; Wei, Junhong; Achatz, Ulrich
2016-11-01
With the aim of contributing to the improvement of subgrid-scale gravity wave (GW) parameterizations in numerical-weather-prediction and climate models, the comparative relevance to GW drag of direct GW-mean-flow interactions and turbulent wave breakdown is investigated. Of equal interest is how well Wentzel-Kramers-Brillouin (WKB) theory can capture direct wave-mean-flow interactions, which are excluded when the steady-state approximation is applied. WKB is implemented in a very efficient Lagrangian ray-tracing approach that considers wave-action density in phase space, thereby avoiding numerical instabilities due to caustics. It is supplemented by a simple wave-breaking scheme based on a static-instability saturation criterion. Idealized test cases of horizontally homogeneous GW packets are considered, with wave-resolving large-eddy simulations (LES) providing the reference. In all of these cases the WKB simulations including direct GW-mean-flow interactions reproduce the LES data to good accuracy, even without the wave-breaking scheme. The latter provides a next-order correction that is useful for fully capturing the total-energy balance between wave and mean flow. This is not the case when a steady-state WKB implementation is used, as in present GW parameterizations.
3D Simulations for a Micron-Scale, Dielectric-Based Acceleration Experiment
Yoder, R. B.; Travish, G.; Xu Jin; Rosenzweig, J. B.
2009-01-22
An experimental program to demonstrate a dielectric, slab-symmetric accelerator structure has been underway for the past two years. These resonant devices are driven by a side-coupled 800-nm laser and can be configured to maintain the field profile necessary for synchronous acceleration and focusing of relativistic or nonrelativistic particles. We present 3D simulations of various versions of the structure geometry, including a metal-walled structure relevant to ongoing cold tests on resonant properties, and an all-dielectric structure to be constructed for a proof-of-principle acceleration experiment.
Microparticle accelerator of unique design. [for micrometeoroid impact and cratering simulation
NASA Technical Reports Server (NTRS)
Vedder, J. F.
1978-01-01
A microparticle accelerator has been devised for micrometeoroid impact and cratering simulation; the device produces high-velocity (0.5-15 km/sec), micrometer-sized projectiles of any cohesive material. In the source, an electrodynamic levitator, single particles are charged by ion bombardment in high vacuum. The vertical accelerator has four drift tubes, each initially at a high negative voltage. After injection of the projectile, each tube is grounded in turn at a time determined by the voltage and charge/mass ratio to give four acceleration stages with a total voltage equivalent to about 1.7 MV.
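The quoted projectile velocities follow from energy conservation for electrostatic acceleration. A small sketch; the charge-to-mass ratio used in the example is illustrative, and only the ~1.7 MV total voltage and the 0.5-15 km/s range come from the abstract:

```python
import math

def final_velocity(q_over_m, v_total):
    """Final speed (m/s) of a charged particle accelerated through a
    total potential of v_total volts, from energy conservation:
    (1/2) m v^2 = q V  =>  v = sqrt(2 (q/m) V)."""
    return math.sqrt(2.0 * q_over_m * v_total)

# Illustrative: a microparticle with q/m = 10 C/kg through the ~1.7 MV
# equivalent four-stage drift-tube voltage quoted in the abstract
# reaches about 5.8 km/s, within the stated 0.5-15 km/s range.
v = final_velocity(10.0, 1.7e6)
```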
Capture and Control of Laser-Accelerated Proton Beams: Experiment and Simulation
Nurnberg, F; Alber, I; Harres, K; Schollmeier, M; Roth, M; Barth, W; Eickhoff, H; Hofmann, I; Friedman, A; Grote, D; Logan, B G
2009-05-13
This paper summarizes ongoing studies of the possibilities for transport and RF capture of laser-accelerated proton beams in conventional accelerator structures. First results on the capture of laser-accelerated proton beams are presented, supported by Trace3D, CST Particle Studio, and Warp simulations. Based on these results, the development of the pulsed high-field solenoid is guided by our desire to optimize the output particle number for this highly divergent beam with an exponential energy spectrum. A future experimental test stand is proposed for studies of its application as a new particle source.
Accelerated discovery of OLED materials through atomic-scale simulation
NASA Astrophysics Data System (ADS)
Halls, Mathew D.; Giesen, David J.; Hughes, Thomas F.; Goldberg, Alexander; Cao, Yixiang; Kwak, H. Shaun; Mustard, Thomas J.; Gavartin, Jacob
2016-09-01
Organic light-emitting diode (OLED) devices are under widespread investigation to displace or complement inorganic optoelectronic devices for solid-state lighting and active displays. The materials in these devices are selected or designed according to their intrinsic and extrinsic electronic properties, with concern for efficient charge injection and transport and for the desired stability and light-emission characteristics. The chemical design space for OLED materials is enormous, and there is a need for computational approaches to help identify the most promising solutions for experimental development. In this work we present examples of simulation approaches available to efficiently screen libraries of potential OLED materials, including first-principles prediction of key intrinsic properties and classical simulation of amorphous morphology and stability. An alternative to exhaustive computational screening is also introduced, based on a biomimetic evolutionary framework that evolves the molecular structure in the calculated OLED property design space.
Mainstreaming Modeling and Simulation to Accelerate Public Health Innovation
Sepulveda, Martin-J.; Mabry, Patricia L.
2014-01-01
Dynamic modeling and simulation are systems science tools that examine behaviors and outcomes resulting from interactions among multiple system components over time. Although there are excellent examples of their application, they have not been adopted as mainstream tools in population health planning and policymaking. Impediments to their use include the legacy and ease of use of statistical approaches that produce estimates with confidence intervals, the difficulty of multidisciplinary collaboration for modeling and simulation, systems scientists’ inability to communicate effectively the added value of the tools, and low funding for population health systems science. Proposed remedies include aggregation of diverse data sets, systems science training for public health and other health professionals, changing research incentives toward collaboration, and increased funding for population health systems science projects. PMID:24832426
GPU accelerated numerical simulations of viscoelastic phase separation model.
Yang, Keda; Su, Jiaye; Guo, Hongxia
2012-07-05
We introduce a complete implementation of a viscoelastic model for numerical simulations of phase-separation kinetics in dynamically asymmetric systems, such as polymer blends and polymer solutions, on a graphics processing unit (GPU) using the CUDA language, and we discuss the algorithms and optimizations in detail. From studies of a polymer solution, we show that the GPU-based implementation correctly reproduces the accepted results and provides a speedup of about 190 times over a single central processing unit (CPU). Further accuracy analysis demonstrates that both single- and double-precision calculations on the GPU are sufficient to produce high-quality results in numerical simulations of the viscoelastic model. The GPU-based viscoelastic model is therefore very promising for studying the many phase-separation processes of experimental and theoretical interest that take place on large length and time scales and are not easily addressed by a conventional implementation running on a single CPU.
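The paper's implementation is CUDA-specific, but the kind of phase-separation dynamics it accelerates can be illustrated with a minimal CPU reference in Python/NumPy. The sketch below steps a plain Cahn-Hilliard model, a standard description of phase-separation kinetics; the viscoelastic coupling of the paper is omitted, and the grid size, time step and parameters are illustrative assumptions:

```python
import numpy as np

def laplacian(f):
    # periodic five-point Laplacian on a unit-spaced grid
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def cahn_hilliard_step(phi, dt=0.01, kappa=1.0, mobility=1.0):
    """One explicit Euler step of the Cahn-Hilliard equation
    d(phi)/dt = M * lap(mu), with mu = phi^3 - phi - kappa * lap(phi)."""
    mu = phi**3 - phi - kappa * laplacian(phi)
    return phi + dt * mobility * laplacian(mu)

rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((64, 64))  # small fluctuations around phi = 0
for _ in range(200):
    phi = cahn_hilliard_step(phi)
# the order parameter is conserved: phi.mean() stays (numerically) constant
```

Each grid point depends only on its neighbours, which is exactly the data-parallel stencil structure that maps well onto a GPU kernel.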
Simulation of imaging radar using graphics hardware acceleration
NASA Astrophysics Data System (ADS)
Peinecke, Niklas; Döhler, Hans-Ullrich; Korn, Bernd R.
2008-04-01
Extending previous work by Doehler and Bollmeyer, we describe a new implementation of an imaging radar simulator. Our approach is based on modern computer graphics hardware and makes heavy use of recent technologies such as vertex and fragment shaders. Furthermore, to produce a nearly realistic image, we generate radar shadows by implementing shadow-map techniques in the programmable graphics hardware. The particular implementation is tailored to imitate millimeter-wave (MMW) radar but could easily be extended to other types of radar systems.
Community Project for Accelerator Science and Simulation (ComPASS)
Simmons, Christopher; Carey, Varis
2016-10-12
After concluding our initial exercise (solving a simplified statistical inverse problem with the laser intensity as an unknown parameter) of coupling Vorpal and our parallel statistical library QUESO, we shifted the application focus to DLA. Our efforts focused on developing a Gaussian process (GP) emulator within QUESO for efficient optimization of power couplers within woodpiles. The smaller simulation size (compared with LPA) allows for sufficient “training runs” to develop a reasonable GP statistical emulator for a parameter space of moderate dimension.
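A GP emulator of the kind described replaces expensive simulation runs with a cheap statistical surrogate fit to a handful of training runs. A minimal sketch in Python/NumPy, assuming a squared-exponential kernel and a toy one-parameter "simulator"; the real QUESO emulator, its kernel choices, and the DLA objective are not shown here:

```python
import numpy as np

def rbf_kernel(A, B, length=0.3, sigma=1.0):
    # squared-exponential covariance between point sets A (n,d) and B (m,d)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

class GPEmulator:
    """Minimal Gaussian-process emulator fit to a handful of training runs."""
    def __init__(self, X, y, noise=1e-8):
        self.X = X
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, Xs):
        Ks = rbf_kernel(Xs, self.X)
        mean = Ks @ self.alpha
        V = np.linalg.solve(self.L, Ks.T)
        var = rbf_kernel(Xs, Xs).diagonal() - (V**2).sum(0)
        return mean, var

# toy "simulator": some scalar response vs. one geometric parameter
simulator = lambda X: np.sin(3.0 * X[:, 0]) * np.exp(-X[:, 0])
X_train = np.linspace(0.0, 2.0, 8)[:, None]        # 8 training runs
gp = GPEmulator(X_train, simulator(X_train))
mean, var = gp.predict(np.array([[0.5]]))          # cheap surrogate query
```

Once fitted, the emulator's mean and variance can drive an optimizer over the moderate-dimensional parameter space at a tiny fraction of the cost of further full simulations.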
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser-resolution approaches when simulating large biological systems. High-performance parallel computers have the potential to address this computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling in the function bodies of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86-compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
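The "fill in the function bodies of pre-defined model routines" pattern is an inversion-of-control design: the framework owns the loop (and, in the real system, the parallelization), while the modeler supplies only model-specific hooks. The class and method names below are hypothetical illustrations, not Biocellion's actual API:

```python
# Illustrative sketch of the "fill in pre-defined model routines" pattern used
# by frameworks like Biocellion; these class and method names are hypothetical,
# and the real framework also parallelizes the loop across compute nodes.
class AgentModel:
    """Users subclass this and fill in the pre-defined routines."""
    def init_cells(self):
        raise NotImplementedError
    def update_cell(self, cell, env):
        raise NotImplementedError

class Framework:
    """The framework owns the simulation loop; the modeler supplies only
    the model-specific hooks."""
    def __init__(self, model, steps):
        self.model, self.steps = model, steps
    def run(self):
        cells = self.model.init_cells()
        env = {"nutrient": 1.0}
        for _ in range(self.steps):
            cells = [self.model.update_cell(c, env) for c in cells]
        return cells

class GrowthModel(AgentModel):
    def init_cells(self):
        return [{"mass": 1.0} for _ in range(10)]
    def update_cell(self, cell, env):
        # each step, cells grow in proportion to the available nutrient
        return {"mass": cell["mass"] * (1.0 + 0.1 * env["nutrient"])}

final_cells = Framework(GrowthModel(), steps=5).run()
```

Because the framework controls iteration, it can distribute `update_cell` calls across processors without the modeler writing any parallel code.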
Investigation on Accelerating Dust Storm Simulation via Domain Decomposition Methods
NASA Astrophysics Data System (ADS)
Yu, M.; Gui, Z.; Yang, C. P.; Xia, J.; Chen, S.
2014-12-01
Dust storm simulation is a data- and computing-intensive process that requires high efficiency and adequate computing resources. To speed up the process, high-performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in parallel, the computing performance can be significantly improved. However, how to allocate these subdomain processes to computing nodes without introducing imbalanced task loads and unnecessary communication among nodes remains an open question. Here we propose a domain decomposition and allocation framework that carefully balances the computing cost and communication cost of each computing node to minimize total execution time and reduce the overall communication cost of the entire system. The framework is tested with the NMM (Nonhydrostatic Mesoscale Model)-dust model, in which a 72-hour dust-load process is simulated. Performance results using the proposed scheduling method are compared with those obtained using MPI's default scheduling. The results demonstrate that the proposed system improves simulation performance by 20% to 80%.
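A simple baseline for the allocation problem is greedy longest-processing-time scheduling, which balances computing cost across nodes; the paper's framework additionally accounts for communication cost between geographically adjacent subdomains, which this sketch ignores. The costs and node count below are made up for illustration:

```python
def allocate(subdomains, n_nodes):
    """Greedy longest-processing-time scheduling: take subdomains in order of
    decreasing estimated computing cost and assign each to the currently
    least-loaded node. Communication cost is ignored in this sketch."""
    loads = [0.0] * n_nodes
    assignment = {}
    for name, cost in sorted(subdomains.items(), key=lambda kv: -kv[1]):
        node = loads.index(min(loads))
        assignment[name] = node
        loads[node] += cost
    return assignment, loads

# made-up per-subdomain compute costs (e.g. hours), three computing nodes
subs = {"d1": 9.0, "d2": 7.0, "d3": 6.0, "d4": 5.0, "d5": 4.0, "d6": 3.0}
assignment, loads = allocate(subs, 3)
```

For these six illustrative subdomains the greedy pass yields per-node loads of 12, 11 and 11 time units against an ideal of about 11.3, i.e. close to balanced.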
Particle in cell simulation of laser-accelerated proton beams for radiation therapy.
Fourkal, E; Shahine, B; Ding, M; Li, J S; Tajima, T; Ma, C M
2002-12-01
In this article we present the results of particle-in-cell (PIC) simulations of laser-plasma interaction for proton acceleration for radiation therapy treatments. We show that under optimal interaction conditions, protons can be accelerated up to relativistic energies of 300 MeV by a petawatt laser field. The proton acceleration is due to the dragging Coulomb force arising from charge separation induced by the ponderomotive pressure (light pressure) of the high-intensity laser. The proton energy and phase-space distribution functions obtained from the PIC simulations are used in calculations of dose distributions with the GEANT Monte Carlo simulation code. Because of the broad energy and angular spectra of the protons, a compact particle selection and beam collimation system will be needed to generate small beams of polyenergetic protons for intensity-modulated proton therapy.
NASA Astrophysics Data System (ADS)
Dieckmann, M. E.; Rowlands, G.; Eliasson, B.; Shukla, P. K.
2004-12-01
We examine the electron acceleration by a localized electrostatic potential oscillating at high frequencies by means of particle-in-cell (PIC) simulations, in which we apply oscillating electric fields to two neighboring simulation cells. We derive an analytic model for the direct electron heating by the externally driven antenna electric field, and we confirm that it approximates well the electron heating obtained in the simulations. In the simulations, transient waves accelerate electrons in a sheath surrounding the antenna. This increases the Larmor radii of the electrons close to the antenna, and more electrons can reach the antenna location to interact with the externally driven fields. The resulting hot electron sheath is dense enough to support strong waves that produce high-energy sounder-accelerated electrons (SAEs) by their nonlinear interaction with the ambient electrons. By increasing the emission amplitudes in our simulations to values representative of the sounder on board the OEDIPUS C (OC) satellites, we obtain electron acceleration into an energy range comparable to the 20 keV energies of the SAEs observed by the OC mission. The emission also triggers stable electrostatic waves oscillating at frequencies close to the first harmonic of the electron cyclotron frequency. We find this to be an encouraging first step in examining SAE generation with kinetic numerical simulation codes.
MO-F-16A-02: Simulation of a Medical Linear Accelerator for Teaching Purposes
Carlone, M; Lamey, M; Anderson, R; MacPherson, M
2014-06-15
Purpose: The detailed functioning of linear accelerator physics is well known. Less well developed is a basic understanding of how adjustment of the linear accelerator's electrical components affects the resulting radiation beam. Other than the text by Karzmark, there is very little literature devoted to a practical understanding of linear accelerator functionality targeted at the radiotherapy clinic level. The purpose of this work is to describe a simulation environment for medical linear accelerators intended for teaching linear accelerator physics. Methods: Varian-type linacs were simulated. Klystron saturation and peak output were modelled analytically. The energy gain of the electron beam was modelled using load-line expressions. The bending magnet was assumed to be a perfect solenoid whose pass-through energy varies linearly with solenoid current. The dose rate calculated at depth in water was assumed to be a simple function of the target's beam current. The flattening filter was modelled as an attenuator of conical shape, and the time-averaged dose rate at depth in water was determined by calculating kerma. Results: Fifteen analytical models were combined into a single model called SIMAC. Performance was verified systematically by adjusting typical linac control parameters. Increasing the klystron pulse voltage increased the dose rate to a peak, beyond which the dose rate decreased as the beam energy rose further, because of the fixed pass-through energy of the bending magnet. Increasing the accelerator beam current leads to a higher dose per pulse; however, the energy of the electron beam decreases due to beam loading, so the dose rate eventually reaches a maximum and then decreases as the beam current is increased further. Conclusion: SIMAC can realistically simulate the functionality of a linear accelerator. It is expected to have value as a teaching tool for both medical physicists and linear accelerator service personnel.
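The interplay between beam loading and the bending magnet's fixed pass-through energy can be reproduced with a toy load-line model. The functional forms and constants below are illustrative assumptions for teaching purposes, not SIMAC's fifteen models or Varian values:

```python
import numpy as np

def dose_rate(i_beam, e0=20.0, k_load=0.05, e_pass=6.0, width=0.5):
    """Toy load-line model: electron energy falls linearly with beam current
    (beam loading), dose per pulse rises with current, and the bending
    magnet passes only electrons near its fixed pass-through energy e_pass
    (MeV). All constants here are illustrative, not SIMAC's or Varian's."""
    energy = e0 - k_load * i_beam                        # load line (MeV)
    passed = np.exp(-0.5 * ((energy - e_pass) / width) ** 2)
    return i_beam * passed                               # arbitrary dose units

i = np.linspace(0.0, 400.0, 4001)                        # beam current (mA)
d = dose_rate(i)
i_opt = i[np.argmax(d)]                                  # current at peak dose rate
```

The dose rate peaks near the current at which the load line crosses the magnet's pass-through energy (about 280 mA with these constants) and falls off on either side, matching the qualitative behaviour described in the Results.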
Electric field simulation and measurement of a pulse line ion accelerator
NASA Astrophysics Data System (ADS)
Shen, Xiao-Kang; Zhang, Zi-Min; Cao, Shu-Chun; Zhao, Hong-Wei; Wang, Bo; Shen, Xiao-Li; Zhao, Quan-Tang; Liu, Ming; Jing, Yi
2012-07-01
An oil-dielectric helical pulse line has been designed and fabricated to demonstrate the principles of a Pulse Line Ion Accelerator (PLIA). Simulation of the accelerator's axial electric field with the CST code has been completed, and the simulation results agree closely with theoretical calculations. To determine the real value of the electric field excited by the helical line in the PLIA, an integrated electro-optic electric field measurement system was adopted. The measurement shows that the actual magnitude of the axial electric field is smaller than the calculated one, probably because the actual pitch of the resistor column is much smaller than that of the helix.
Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations
Paul, K.; Bruhwiler, D. L.; Cowan, B.; Cary, J. R.; Huang, C.; Mori, W. B.; Tsung, F. S.; Cormier-Michel, E.; Geddes, C. G. R.; Esarey, E.; Fonseca, R. A.; Martins, S. F.; Silva, L. O.
2009-01-22
Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti:sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise and show that the accelerating gradients from full PIC agree for all values of a_0, and that full and reduced PIC agree well for values of a_0 approaching 4.
Accelerated GPU simulation of compressible flow by the discontinuous evolution Galerkin method
NASA Astrophysics Data System (ADS)
Block, B. J.; Lukáčová-Medvid'ová, M.; Virnau, P.; Yelash, L.
2012-08-01
The aim of the present paper is to report our recent results on GPU-accelerated simulations of compressible flows. For the numerical simulation, the adaptive discontinuous Galerkin method with the multidimensional bicharacteristic-based evolution Galerkin operator has been used. For time discretization we have applied the explicit third-order Runge-Kutta method. Evaluation of the genuinely multidimensional evolution operator has been accelerated using a GPU implementation. We have obtained a speedup of up to 30 (in comparison to a single CPU core) for the calculation of the evolution Galerkin operator on a typical discretization mesh consisting of 16384 mesh cells.
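The explicit third-order Runge-Kutta time discretization mentioned above is commonly implemented in the strong-stability-preserving (Shu-Osher) form. A self-contained Python sketch with a convergence check; the spatial evolution Galerkin operator is replaced here by a scalar test problem, so this illustrates only the time integrator:

```python
import numpy as np

def ssp_rk3(f, u, t, dt):
    """One strong-stability-preserving third-order Runge-Kutta step
    (Shu-Osher form), a common realization of the explicit third-order
    time discretization."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

def integrate(n_steps):
    # scalar test problem u' = -u, u(0) = 1, integrated to t = 1
    u, t, dt = 1.0, 0.0, 1.0 / n_steps
    f = lambda t, u: -u
    for _ in range(n_steps):
        u = ssp_rk3(f, u, t, dt)
        t += dt
    return u

err_coarse = abs(integrate(50) - np.exp(-1.0))
err_fine = abs(integrate(100) - np.exp(-1.0))  # should drop ~8x (3rd order)
```

Halving the time step reduces the error by roughly a factor of eight, confirming the third-order accuracy of the scheme.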
SU-E-T-512: Electromagnetic Simulations of the Dielectric Wall Accelerator
Uselmann, A; Mackie, T
2014-06-01
Purpose: To characterize and parametrically study the key components of a dielectric wall accelerator through electromagnetic modeling and particle tracking. Methods: Electromagnetic and particle-tracking simulations were performed using a commercial code (CST Microwave Studio, CST Inc.) utilizing the finite integration technique. A dielectric wall accelerator consists of a series of stacked transmission lines sequentially fired in synchrony with an ion pulse. Numerous properties of the stacked transmission lines, including geometric, material, and electronic properties, were analyzed and varied in order to assess their impact on the transverse and axial electric fields. Additionally, stacks of transmission lines were simulated in order to quantify the parasitic effect observed in closely packed lines. Particle-tracking simulations using the particle-in-cell method were performed on the various stacks to determine the impact of the above properties on the resultant phase space of the ions. Results: Examination of the simulation results shows that novel geometries can shape the accelerating pulse in order to reduce the energy spread and increase the average energy of accelerated ions. Parasitic effects were quantified for various geometries and found to vary with distance from the end of the transmission line and along the beam axis. An optimal arrival time of an ion pulse relative to the triggering of the transmission lines for a given geometry was determined through parametric study. Benchmark simulations of single transmission lines agree well with published experimental results. Conclusion: This work characterized the behavior of the transmission lines used in a dielectric wall accelerator and used this information to improve them in novel ways. Utilizing novel geometries, we were able to improve the accelerating gradient and phase space of the accelerated particle bunch. Through simulation, we were able to discover and optimize design issues with the device at
WarpIV: In Situ Visualization and Analysis of Ion Accelerator Simulations.
Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; Grote, David P; Lehe, Remi; Bulanov, Stepan; Vincenti, Henri; Bethel, E Wes
2016-01-01
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. This supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
Accelerated Monte Carlo models to simulate fluorescence spectra from layered tissues.
Swartling, Johannes; Pifferi, Antonio; Enejder, Annika M K; Andersson-Engels, Stefan
2003-04-01
Two efficient Monte Carlo models are described, facilitating predictions of complete time-resolved fluorescence spectra from a light-scattering and light-absorbing medium. These are compared with a third, conventional fluorescence Monte Carlo model in terms of accuracy, signal-to-noise statistics, and simulation time. The improved computational efficiency is achieved by means of a convolution technique, justified by the symmetry of the problem. Furthermore, the reciprocity principle for photon paths, employed in one of the accelerated models, is shown to drastically simplify the computation of the distribution of the emitted fluorescence. A so-called white Monte Carlo approach is finally suggested for efficient simulations of one excitation wavelength combined with a wide range of emission wavelengths. Here the fluorescence is simulated in a purely scattering medium, and the absorption properties are taken into account analytically afterward. This approach is applicable to the conventional model as well as to the two accelerated models. Essentially the same absolute values for the fluorescence, integrated over the emitting surface and time, are obtained for the three models within the accuracy of the simulations. The time-resolved and spatially resolved fluorescence exhibits a slight overestimation at short delay times close to the source, corresponding to approximately two grid elements for the accelerated models, as a result of the discretization and the convolution. The improved efficiency is most prominent for the reverse-emission accelerated model, for which the simulation time can be reduced by up to two orders of magnitude.
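The "white Monte Carlo" idea, simulate the scattering once and apply absorption analytically afterwards, can be sketched in one dimension: store each photon's total path length, then reweight the stored ensemble by Beer-Lambert attenuation for any absorption coefficient. The toy slab geometry and coefficients below are illustrative, not the paper's time-resolved 3-D model:

```python
import numpy as np

rng = np.random.default_rng(1)

def scattering_only_paths(n_photons, mu_s=10.0, slab=1.0):
    """Track the total path length of photons random-walking through a 1-D
    slab with scattering only; absorption is deferred. This toy geometry
    stands in for the paper's full 3-D time-resolved model."""
    lengths = np.empty(n_photons)
    for i in range(n_photons):
        x, direction, travelled = 0.0, 1.0, 0.0
        while 0.0 <= x <= slab:
            step = rng.exponential(1.0 / mu_s)   # free path between scatterings
            x += direction * step
            travelled += step
            direction = rng.choice([-1.0, 1.0])  # isotropic redirection (1-D)
        lengths[i] = travelled
    return lengths

paths = scattering_only_paths(2000)

def escape_signal(mu_a):
    # "white" step: reuse the same stored paths for any absorption coefficient
    return np.mean(np.exp(-mu_a * paths))

s_low, s_high = escape_signal(0.1), escape_signal(1.0)
```

One scattering simulation thus serves an entire range of emission wavelengths, each with its own absorption coefficient, which is the source of the claimed efficiency gain.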
Direct numerical simulation of turbulence using GPU accelerated supercomputers
NASA Astrophysics Data System (ADS)
Khajeh-Saeed, Ali; Blair Perot, J.
2013-02-01
Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors, and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU-based supercomputers. The potential for overlapping MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.
Accelerated Monte Carlo simulations with restricted Boltzmann machines
NASA Astrophysics Data System (ADS)
Huang, Li; Wang, Lei
2017-01-01
Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
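The key correctness requirement when a learned model proposes Monte Carlo updates is the Metropolis-Hastings acceptance ratio, which must include the proposal density so that the exact target distribution is preserved. The sketch below uses a fixed Gaussian surrogate in place of a trained restricted Boltzmann machine, and a sharp double-well toy target in place of the Falicov-Kimball model; global proposals from the surrogate mix between the modes where local updates would stall:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    # sharp double-well density: a stand-in for a physics model whose
    # local-update Monte Carlo mixes slowly between the two modes
    return -((x**2 - 1.0) ** 2) / 0.1

def log_proposal(x):
    # independent proposal from a broad fixed surrogate; in the paper this
    # role is played by a trained restricted Boltzmann machine
    return -0.5 * x**2 / 1.5**2

def mh_chain(n_samples):
    """Metropolis-Hastings with an independence proposal: the acceptance
    ratio must include the proposal density, not just the target."""
    x, samples = 1.0, []
    for _ in range(n_samples):
        xp = rng.normal(0.0, 1.5)
        log_a = (log_target(xp) - log_target(x)) + (log_proposal(x) - log_proposal(xp))
        if rng.random() < np.exp(min(0.0, log_a)):
            x = xp
        samples.append(x)
    return np.array(samples)

s = mh_chain(20000)
frac_left = np.mean(s < 0.0)   # global jumps let both wells be visited
```

Because the surrogate proposes jumps directly between the wells, the chain visits both modes roughly equally, which is the mixing improvement the paper exploits near the phase transition.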
Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes
Zhang, Xiaomei; Tajima, Toshiki; Farinella, Deano; Shin, Youngmin; Mourou, Gerard; Wheeler, Jonathan; Taborek, Peter; Chen, Pisin; Dollar, Franklin; Shen, Baifei
2016-10-18
Though wakefield acceleration in crystal channels has been proposed previously, x-ray wakefield acceleration has only recently become a realistic possibility, since the invention of the single-cycle optical laser compression technique. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort x-ray pulse guided by a nanoscale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about 3 orders of magnitude stronger than that of conventional plasma-based wakefield acceleration, which implies the possibility of an extremely compact scheme for attaining ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high-energy photons at ~O(10–100) MeV. Our simulations confirm such high-energy photon emission, in contrast with that induced by the optical-laser-driven wakefield scheme. The significantly improved emittance of the energetic electrons is also discussed.
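The three-orders-of-magnitude claim follows from the standard cold wave-breaking scaling E_wb = m_e c ω_p / e ∝ √n: moving from gas-jet densities to the near-solid electron densities accessible in a nanotube channel raises the attainable gradient by the square root of the density ratio. A quick order-of-magnitude check (the two densities are illustrative choices, not the paper's simulation parameters):

```python
import numpy as np

# Cold, nonrelativistic wave-breaking field E_wb = m_e * c * omega_p / e,
# the standard estimate of the attainable wakefield gradient (SI units).
m_e, c, e, eps0 = 9.109e-31, 2.998e8, 1.602e-19, 8.854e-12

def wave_breaking_field(n_per_cm3):
    n = n_per_cm3 * 1e6                          # cm^-3 -> m^-3
    omega_p = np.sqrt(n * e**2 / (eps0 * m_e))   # electron plasma frequency
    return m_e * c * omega_p / e                 # field in V/m

gas = wave_breaking_field(1e18)     # typical optical-laser gas-jet density
solid = wave_breaking_field(1e24)   # illustrative near-solid channel density
ratio = solid / gas                 # scales as sqrt(density ratio) = 1000
```

With n = 10^24 cm^-3 this gives roughly 10^12 V/cm, i.e. on the order of a TeV/cm, consistent with the gradient quoted in the abstract.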
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
Simulator for an Accelerator-Driven Subcritical Fissile Solution System
Klein, Steven Karl; Day, Christy M.; Determan, John C.
2015-09-14
LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation (DSS) comprised of coupled nonlinear differential equations describing the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems, as is a stability model. The DSS may then be converted to an implementation in Visual Studio to give a design team the ability to rapidly estimate the system-performance impacts of a variety of design decisions, providing a method to assist in optimizing the system design. Once the design has been generated in some detail, the C++ version of the system model may be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation, as well as operator recognition of and response to off-normal events. Taken as a set of system models, the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design-support tools.
Accelerated finite element elastodynamic simulations using the GPU
Huthwaite, Peter
2014-01-15
An approach is developed to perform explicit time domain finite element simulations of elastodynamic problems on the graphics processing unit, using Nvidia's CUDA. Of critical importance for this problem is the arrangement of nodes in memory, allowing data to be loaded efficiently and minimising communication between the independently executed blocks of threads. The initial stage of memory arrangement is partitioning the mesh; both a well-established ‘greedy’ partitioner and a new, more efficient ‘aligned’ partitioner are investigated. A method is then developed to efficiently arrange the memory within each partition. The software is applied to three models from the fields of non-destructive testing, vibrations and geophysics, demonstrating a memory bandwidth very close to the card's maximum, reflecting the bandwidth-limited nature of the algorithm. Comparison with Abaqus, a widely used commercial CPU equivalent, validated the accuracy of the results and demonstrated a speed improvement of around two orders of magnitude. A software package, Pogo, incorporating these developments, is released open source, downloadable from http://www.pogo-fea.com/ to benefit the community. Highlights: •A novel memory arrangement approach is discussed for finite elements on the GPU. •The mesh is partitioned, then nodes are arranged efficiently within each partition. •Models from ultrasonics, vibrations and geophysics are run. •The code is significantly faster than an equivalent commercial CPU package. •Pogo, the new software package, is released open source.
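As a rough sketch of what a 'greedy' mesh partitioner does (Pogo's actual algorithm is more sophisticated and memory-aware), the following grows each partition from a seed element, always absorbing the unassigned neighbour that shares the most nodes with the partition so far, which tends to keep inter-partition communication small. The tiny quad mesh and all names are illustrative.

```python
from collections import defaultdict

# Greedy growth of balanced partitions over a finite element mesh,
# given elements as tuples of node indices. Illustrative sketch only.
def greedy_partition(elements, part_size):
    node2elems = defaultdict(set)            # node -> elements touching it
    for e, nodes in enumerate(elements):
        for n in nodes:
            node2elems[n].add(e)
    unassigned = set(range(len(elements)))
    parts = []
    while unassigned:
        part, boundary = [], set()
        frontier = {min(unassigned)}         # deterministic seed element
        while frontier and len(part) < part_size:
            # Absorb the frontier element sharing the most nodes with the part.
            e = max(frontier, key=lambda e: len(set(elements[e]) & boundary))
            frontier.discard(e)
            unassigned.discard(e)
            part.append(e)
            boundary |= set(elements[e])
            for n in elements[e]:
                frontier |= node2elems[n] & unassigned
        parts.append(part)
    return parts

# Four quad elements of a 3x3-node structured mesh:
elems = [(0, 1, 3, 4), (1, 2, 4, 5), (3, 4, 6, 7), (4, 5, 7, 8)]
parts = greedy_partition(elems, part_size=2)
```

Each element ends up in exactly one partition, and partitions respect the size bound, which is what a GPU thread-block layout needs.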
Particle-in-cell simulations of plasma accelerators and electron-neutral collisions
Bruhwiler, David L.; Giacone, Rodolfo E.; Cary, John R.; Verboncoeur, John P.; Mardahl, Peter; Esarey, Eric; Leemans, W.P.; Shadwick, B.A.
2001-10-01
We present 2-D simulations of both beam-driven and laser-driven plasma wakefield accelerators, using the object-oriented particle-in-cell code XOOPIC, which is time explicit, fully electromagnetic, and capable of running on massively parallel supercomputers. Simulations of laser-driven wakefields with low (~10^16 W/cm^2) and high (~10^18 W/cm^2) peak intensity laser pulses are conducted in slab geometry, showing agreement with theory and fluid simulations. Simulations of the E-157 beam wakefield experiment at the Stanford Linear Accelerator Center, in which a 30 GeV electron beam passes through 1 m of preionized lithium plasma, are conducted in cylindrical geometry, obtaining good agreement with previous work. We briefly describe some of the more significant modifications of XOOPIC required by this work, and summarize the issues relevant to modeling relativistic electron-neutral collisions in a particle-in-cell code.
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Mizuno, Y.; Hardee, P.; Hededal, C. B.; Fishman, G. J.
2006-01-01
Recent PIC simulations of relativistic electron-ion (and electron-positron) jets injected into ambient plasmas show that acceleration occurs in relativistic shocks. The Weibel instability created in the shocks is responsible for particle acceleration, and for the generation and amplification of highly inhomogeneous, small-scale magnetic fields. These magnetic fields contribute to the electrons' transverse deflection in relativistic jets. The "jitter" radiation from deflected electrons has different properties from synchrotron radiation, which is calculated for a uniform magnetic field. This jitter radiation may be important for understanding the complex time evolution and spectral structure of relativistic jets and gamma-ray bursts. We will present recent PIC simulations that show particle acceleration and magnetic field generation, and will also calculate the associated self-consistent emission from relativistic shocks.
The changing paradigm for integrated simulation in support of Command and Control (C2)
NASA Astrophysics Data System (ADS)
Riecken, Mark; Hieb, Michael
2016-05-01
Modern software and network technologies are on the verge of enabling what has eluded the simulation and operational communities for more than two decades, truly integrating simulation functionality into operational Command and Control (C2) capabilities. This deep integration will benefit multiple stakeholder communities from experimentation and test to training by providing predictive and advanced analytics. There is a new opportunity to support operations with simulation once a deep integration is achieved. While it is true that doctrinal and acquisition issues remain to be addressed, nonetheless it is increasingly obvious that few technical barriers persist. How will this change the way in which common simulation and operational data is stored and accessed? As the Services move towards single networks, will there be technical and policy issues associated with sharing those operational networks with simulation data, even if the simulation data is operational in nature (e.g., associated with planning)? How will data models that have traditionally been simulation only be merged in with operational data models? How will the issues of trust be addressed?
Numerical simulations of Rayleigh-Taylor (RT) turbulence with complex acceleration history
NASA Astrophysics Data System (ADS)
Ramaprabhu, Praveen; Dimonte, Guy; Andrews, Malcolm
2007-11-01
Complex acceleration histories of an RT-unstable interface are important in validating turbulent mix models. Of particular interest are alternating stages of acceleration and deceleration, since the associated demixing is a discriminating test of such models. We have performed numerical simulations of a turbulent RT mixing layer subjected to two stages of acceleration separated by a stage of deceleration. The acceleration profile was chosen from earlier Linear Electric Motor (LEM) experiments, with which we compare our results. The acceleration phases produce classical RT-unstable growth (t^2) with growth rates comparable to earlier results of turbulent RT simulations. The calculations are challenging, as dominant bubbles become shredded when they reverse direction in response to the reversal in g, placing increased demands on numerical resolution. The shredding to small scales is accompanied by a peaking of the molecular mixing during the RT-stable stage. In general, we find that the simulations agree with experiments when initialized with broadband initial perturbations, but not with an annular shell. Other effects, such as the presence of surface tension in the LEM experiments (but not in our simulations), further complicate this picture.
NASA Astrophysics Data System (ADS)
Cipiccia, S.; Reboredo, D.; Vittoria, Fabio A.; Welsh, G. H.; Grant, P.; Grant, D. W.; Brunetti, E.; Wiggins, S. M.; Olivo, A.; Jaroszynski, D. A.
2015-05-01
X-ray phase contrast imaging (X-PCi) is a very promising method of dramatically enhancing the contrast of X-ray images of microscopic weakly absorbing objects and soft tissue, and may lead to significant advances in high-resolution, low-dose medical imaging. The interest in X-PCi is giving rise to a demand for effective simulation methods. Monte Carlo codes have proved a valuable tool for studying X-PCi, including coherent effects. The laser-plasma wakefield accelerator (LWFA) is a very compact particle accelerator that uses plasma as the accelerating medium. Accelerating gradients in excess of 1 GV/cm can be obtained, which makes LWFAs over a thousand times more compact than conventional accelerators. LWFAs are also sources of brilliant betatron radiation, which is promising for applications including medical imaging. We present a study that explores the potential of LWFA-based betatron sources for medical X-PCi, investigate their resolution limit using numerical simulations based on the FLUKA Monte Carlo code, and present preliminary experimental results.
Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation
NASA Astrophysics Data System (ADS)
Khramtsov, P. P.; Vasetskij, V. A.; Makhnach, A. I.; Grishenko, V. M.; Chernik, M. Yu; Shikh, I. A.; Doroshko, M. V.
2016-11-01
The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a pressing problem, given the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents results of an experimental study of a two-stage light-gas magnetoplasma launcher for macroparticle acceleration, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. Particles were launched in vacuum. A speed measuring method was developed for projectile velocity control; its error does not exceed 5%. The flight of the projectile from the barrel and its collision with a target were recorded with a high-speed camera. The results of projectile collisions with elements of meteoroid shielding are presented. To increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. We can therefore expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas.
The changing face of surgical education: simulation as the new paradigm.
Scott, Daniel J; Cendan, Juan C; Pugh, Carla M; Minter, Rebecca M; Dunnington, Gary L; Kozar, Rosemary A
2008-06-15
Surgical simulation has evolved considerably over the past two decades and now plays a major role in training efforts designed to foster the acquisition of new skills and knowledge outside of the clinical environment. Numerous driving forces have fueled this fundamental change in educational methods, including concerns over patient safety and the need to maximize efficiency within the context of limited work hours and clinical exposure. The importance of simulation has been recognized by the major stakeholders in surgical education, and the Residency Review Committee has mandated that all programs implement skills training curricula in 2008. Numerous issues now face educators who must use these novel training methods. It is important that these individuals have a solid understanding of content, development, research, and implementation aspects of simulation. This paper highlights presentations about these topics from a panel of experts convened at the 2008 Academic Surgical Congress.
Equation-based languages – A new paradigm for building energy modeling, simulation and optimization
Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.
2016-04-01
Most state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper and explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. Exploiting the equation-based language led to a 2,200 times faster solution.
Accelerated stochastic and hybrid methods for spatial simulations of reaction diffusion systems
NASA Astrophysics Data System (ADS)
Rossinelli, Diego; Bayati, Basil; Koumoutsakos, Petros
2008-01-01
Spatial distributions characterize the evolution of reaction-diffusion models of several physical, chemical, and biological systems. We present two novel algorithms for the efficient simulation of these models: Spatial τ-Leaping (Sτ-Leaping), employing a unified acceleration of the stochastic simulation of reaction and diffusion, and Hybrid τ-Leaping (Hτ-Leaping), combining a deterministic diffusion approximation with a τ-Leaping acceleration of the stochastic reactions. The algorithms are validated by solving Fisher's equation and used to explore the role of the number of particles in pattern formation. The results indicate that the present algorithms have a nearly constant time complexity with respect to the number of events (reaction and diffusion), unlike the exact stochastic simulation algorithm, which scales linearly.
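Both algorithms build on the standard τ-leaping idea: rather than simulating every reaction event individually, as the exact stochastic simulation algorithm does, each leap fires a Poisson-distributed batch of events. A minimal sketch for a single reaction A → B (illustrative only, not the paper's Sτ/Hτ variants):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal tau-leaping for the reaction A -> B with rate constant k:
# per leap of length tau, the number of firings is Poisson(a * tau),
# where a is the current propensity, capped at the available population.
def tau_leap(x, k, tau, steps):
    for _ in range(steps):
        a = k * x[0]                          # propensity of A -> B
        n = min(rng.poisson(a * tau), x[0])   # events this leap
        x[0] -= n
        x[1] += n
    return x

x = tau_leap([1000, 0], k=1.0, tau=0.01, steps=200)
# Particle number is conserved and A decays roughly as exp(-k t).
```

The cost per leap is independent of how many events fire, which is why the complexity is nearly constant in the number of events.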
D-leaping: Accelerating stochastic simulation algorithms for reactions with delays
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2009-09-01
We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing the delayed reactions in a time-adaptive fashion, so as to minimize the computational effort while preserving accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced by the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing methods for several benchmark problems, while it is orders of magnitude faster for certain systems of biochemical reactions.
Three-dimensional simulations of high-current beams in induction accelerators with WARP3d
Grote, D.P.; Friedman, A.; Haber, I.
1995-09-01
For many issues relevant to acceleration and propagation of heavy-ion beams for inertial confinement fusion, understanding the behavior of the beam requires the self-consistent inclusion of the beam's self-fields in multiple dimensions. For these reasons, the three-dimensional simulation code WARP3d was developed. The code combines the particle-in-cell plasma simulation technique with a realistic description of the elements which make up an accelerator. In this paper, the general structure of the code is reviewed, and details of two ongoing applications are presented along with a discussion of the simulation techniques used. The most important results of this work are presented.
NASA Astrophysics Data System (ADS)
Sonnad, Kiran G.; Hammond, Kenneth C.; Schwartz, Robert M.; Veitzer, Seth A.
2014-08-01
The use of transverse electric (TE) waves has proved to be a powerful, noninvasive method for estimating the densities of electron clouds formed in particle accelerators. Results from the plasma simulation program VSim have served as a useful guide for experimental studies related to this method, which have been performed at various accelerator facilities. This paper provides results of the simulation and modeling work done in conjunction with experimental efforts carried out at the Cornell electron storage ring “Test Accelerator” (CESRTA). This paper begins with a discussion of the phase shift induced by electron clouds in the transmission of RF waves, followed by the effect of reflections along the beam pipe, simulation of the resonant standing wave frequency shifts and finally the effects of external magnetic fields, namely dipoles and wigglers. A derivation of the dispersion relationship of wave propagation for arbitrary geometries in field free regions with a cold, uniform cloud density is also provided.
Using graphics processing units to accelerate perturbation Monte Carlo simulation in a turbid medium
NASA Astrophysics Data System (ADS)
Cai, Fuhong; He, Sailing
2012-04-01
We report a fast perturbation Monte Carlo (PMC) algorithm accelerated by graphics processing units (GPUs). The two-step PMC simulation [Opt. Lett. 36, 2095 (2011)] is performed by storing the seeds instead of the photon trajectories, so the requirement on computer random-access memory (RAM) becomes minimal, making the two-step PMC extremely suitable for implementation on a GPU. In a standard simulation of spatially resolved photon migration in a turbid medium, the acceleration ratio between the GPU and a conventional CPU is about 1000. Furthermore, since the two-step PMC algorithm records the effective seeds, i.e., those associated with photons that reach a region of interest, and then re-runs the MC simulation based on the recorded effective seeds, the radiative transfer equation (RTE) can be solved by two-step PMC not only with an arbitrary change in the absorption coefficient, but also with a large change in the scattering coefficient.
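The seed-storing trick can be sketched as follows, using a crude 1-D photon walk in place of a real radiative transport kernel; the geometry, step laws and perturbation size are illustrative assumptions, not the paper's implementation.

```python
import math
import random

# Sketch of the two-step idea: a seed fully determines a photon's path,
# so storing the seed is equivalent to (but far cheaper than) storing
# the whole trajectory. Crude 1-D toy model, not a real RTE solver.
def trace(seed, n_steps=50):
    rng = random.Random(seed)                # the seed reproduces the path exactly
    depth, pathlength = 0.0, 0.0
    for _ in range(n_steps):
        step = rng.expovariate(1.0)          # free path between scattering events
        depth += step * rng.uniform(-1, 1)   # crude direction cosine
        pathlength += step
        if depth >= 5.0:                     # photon reached the region of interest
            return pathlength
    return None                              # missed; no need to store this seed

# Step 1: full simulation, keeping only the "effective" seeds.
effective = [s for s in range(2000) if trace(s) is not None]

# Step 2: replay just the detected photons from their seeds and rescore
# them for a perturbed absorption coefficient via exp(-delta_mu_a * L).
delta_mu_a = 0.02                            # illustrative perturbation
weights = [math.exp(-delta_mu_a * trace(s)) for s in effective]
```

Only the photons that matter are re-traced in step 2, which is why the RAM footprint and the replay cost both stay small.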
Simulation on buildup of electron cloud in a proton circular accelerator
NASA Astrophysics Data System (ADS)
Li, Kai-Wei; Liu, Yu-Dong
2015-10-01
Interaction of electron clouds with high energy positive beams is believed to be responsible for various undesirable effects, such as vacuum degradation, collective beam instability and even beam loss, in high power proton circular accelerators. An important uncertainty in predicting electron cloud instability lies in the detailed processes of the generation and accumulation of the electron cloud. Simulation of the build-up of the electron cloud is necessary for further studies of beam instability caused by electron clouds. The China Spallation Neutron Source (CSNS) is an intense proton accelerator facility now being built, whose accelerator complex includes two main parts: an H⁻ linac and a rapid cycling synchrotron (RCS). The RCS accumulates the 80 MeV proton beam and accelerates it to 1.6 GeV with a repetition rate of 25 Hz. During beam injection at lower energy, the emerging electron cloud may cause serious instability and beam loss on the vacuum pipe. A simulation code has been developed to simulate the build-up, distribution and density of the electron cloud in the CSNS/RCS. Supported by the National Natural Science Foundation of China (11275221, 11175193).
Audu, Musa L.; Kirsch, Robert F.; Triolo, Ronald J.
2013-01-01
The potential efficacy of total body center of mass (COM) acceleration for feedback control of standing balance by functional neuromuscular stimulation (FNS) following spinal cord injury (SCI) was investigated. COM acceleration may be a viable alternative to conventional joint kinematics due to its rapid responsiveness, focal representation of COM dynamics, and ease of measurement. A computational procedure was developed using an anatomically-realistic, three-dimensional, bipedal biomechanical model to determine optimal patterns of muscle excitations to produce targeted effects upon COM acceleration from erect stance. The procedure was verified with electromyographic data collected from standing able-bodied subjects undergoing systematic perturbations. Using 16 muscle groups targeted by existing implantable neuroprostheses, data were generated to train an artificial neural network (ANN)-based controller in simulation. During forward simulations, proportional feedback of COM acceleration drove the ANN to produce muscle excitation patterns countering the effects of applied perturbations. Feedback gains were optimized to minimize upper extremity (UE) loading required to stabilize against disturbances. Compared to the clinical case of maximum constant excitation, the controller reduced UE loading by 43% in resisting external perturbations and by 51% during simulated one-arm reaching. Future work includes performance assessment against expected measurement errors and developing user-specific control systems. PMID:22773529
Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L. (SLAC / Tech-X Corp. / Fermilab)
2008-08-01
SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC-2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at the petascale. These tools target the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), and the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES).
Experiments in sensing transient rotational acceleration cues on a flight simulator
NASA Technical Reports Server (NTRS)
Parrish, R. V.
1979-01-01
Results are presented for two transient motion sensing experiments motivated by the identification of an anomalous roll cue (a 'jerk' attributed to an acceleration spike) in a prior investigation of realistic fighter motion simulation. The experimental results suggest the consideration of several issues for motion washout and challenge current sensory system modeling efforts. Although no sensory modeling effort is made here, it is argued that such models must incorporate the ability to handle transient inputs of short duration (some shorter than the accepted latency times for sensing), and must represent separate channels for rotational acceleration and velocity sensing.
Laser ion acceleration toward future ion beam cancer therapy - Numerical simulation study -
Kawata, Shigeo; Izumiyama, Takeshi; Nagashima, Toshihiro; Takano, Masahiro; Barada, Daisuke; Kong, Qing; Gu, Yan Jun; Wang, Ping Xiao; Ma, Yan Yun; Wang, Wei Min
2013-01-01
Background: Ion beams are used in cancer treatment because of a uniquely favorable feature: an ion beam deposits most of its energy inside the body, where it can kill cancer cells. However, conventional ion accelerators tend to be large and costly. In this paper a future intense-laser ion accelerator is proposed to make the ion accelerator compact. Subjects and methods: An intense femtosecond pulsed laser was employed to accelerate ions. The issues in the laser ion accelerator include the energy efficiency from the laser to the ions, ion beam collimation, ion energy spectrum control, ion beam bunching and ion particle energy control. Particle computer simulations were performed to address these issues, and each component was designed to control the ion beam quality. Results: When an intense laser illuminates a target, electrons in the target are accelerated and leave the target; a strong electric field temporarily forms between the high-energy electrons and the target ions, and the target ions are accelerated. The energy efficiency from the laser to the ions was improved by using a solid target with a fine sub-wavelength structure, or a near-critical density gas plasma. Ion beam collimation was realized by holes behind the solid target. Control of the ion energy spectrum and the ion particle energy, and ion beam bunching, were successfully realized by a multi-stage laser-target interaction. Conclusions: The present study proposed a novel concept for a future compact laser ion accelerator, based on studies of each component required to control the ion beam quality and parameters. PMID:24155555
Developing high energy, stable laser wakefield accelerators: particle simulations and experiments
NASA Astrophysics Data System (ADS)
Geddes, Cameron
2006-10-01
Laser driven wakefield accelerators produce accelerating fields thousands of times stronger than those achievable in conventional radiofrequency accelerators, and recent experiments have produced high energy electron bunches with low emittance and energy spread. Challenges now include control and reproducibility of the electron beam, further improvements in energy spread, and scaling to higher energies. We present large-scale particle-in-cell simulations together with recent experiments towards these goals. In LBNL experiments the relativistically intense drive pulse was guided over more than 10 diffraction ranges by plasma channels. Guiding beyond the diffraction range improved efficiency by allowing use of a smaller laser spot size (and hence higher intensities) over long propagation distances. At a drive pulse power of 9 TW, electrons were trapped from the plasma, and beams of percent energy spread containing > 200 pC of charge above 80 MeV, with normalized emittance estimated at < 2 π mm mrad, were produced. Energies have now been scaled to 1 GeV using 40 TW of laser power. Particle simulations and data showed that the high quality bunch in recent experiments was formed when beam loading turned off injection after initial self-trapping, creating a bunch of electrons isolated in phase space. A narrow energy spread beam was then obtained by extracting the bunch as it outran the accelerating phase of the wake. Large scale simulations coupled with experiments are now under way to better understand the optimization of such accelerators, including production of reproducible electron beams and scaling to energies beyond a GeV. Numerical resolution and two- and three-dimensional effects are discussed, as well as diagnostics for application of the simulations to experiments. Effects including injection and beam dynamics as well as pump laser depletion and reshaping will be described, with application to the design of future experiments. Supported by DOE grant DE-AC02-05CH11231 and by an INCITE
Particle-in-cell/accelerator code for space-charge dominated beam simulation
2012-05-08
Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.
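The core cycle such codes build on, an electrostatic particle-in-cell loop (deposit charge, solve for the field, push particles), can be sketched in 1-D. This toy version uses nearest-grid-point deposition and an FFT Poisson solve; it is a minimal illustration of the technique, far simpler than Warp itself, and all parameters are illustrative.

```python
import numpy as np

# Minimal 1-D electrostatic PIC loop on a periodic domain with a
# neutralizing background: deposit charge, solve Poisson via FFT,
# gather the field to particles, leapfrog push (electrons, q/m = -1).
ng, L, n_part, dt = 64, 2 * np.pi, 10000, 0.1
dx = L / ng
rng = np.random.default_rng(2)
x = rng.uniform(0, L, n_part)                # particle positions
v = rng.normal(0.0, 0.1, n_part)             # particle velocities

for _ in range(50):
    # Charge deposition (nearest-grid-point for brevity).
    idx = (x / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) * (ng / n_part) - 1.0
    # Poisson solve in Fourier space: d^2(phi)/dx^2 = -rho.
    k = np.fft.fftfreq(ng, d=dx) * 2 * np.pi
    k[0] = 1.0                               # avoid division by zero
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0                           # drop the mean (gauge choice)
    E = np.real(np.fft.ifft(-1j * k * phi_k))  # E = -d(phi)/dx
    # Gather and push.
    v -= E[idx] * dt
    x = (x + v * dt) % L
```

A production code like Warp adds higher-order deposition, realistic lattice elements, multiple geometries and electromagnetic field solvers on top of this same skeleton.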
NASA Technical Reports Server (NTRS)
Igenbergs, E. B.; Cour-Palais, B.; Fisher, E.; Stehle, O.
1975-01-01
A new concept for particle acceleration for micrometeoroid simulation was developed at NASA Marshall Space Flight Center, using a high-density self-luminescent fast plasma flow to accelerate glass beads (with diameters up to 1.0 mm) to velocities between 15 and 20 km/s. After a short introduction to the operation of the hypervelocity range, the eight-converter-camera unit used to photograph the plasma flow and the accelerated particles is described. These photographs are obtained with an eight-segment reflecting pyramidal beam splitter. Wratten filters were mounted between the beam splitter and the converter tubes of the cameras. The photographs, which were recorded on black and white film, were used to make the matrices for the dye-color process, which produced the prints shown.
Mean-state acceleration of cloud-resolving models and large eddy simulations
Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.
2015-10-29
Large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, aid in the development of moist physical parameterizations for global models, study cloud-climate feedbacks and cloud-aerosol interaction, and serve as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation and thereby accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2-16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical of a GCM.
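The acceleration idea can be sketched with toy dynamics: after each model step, amplify the increment of the horizontal-mean profile by a factor alpha while leaving the eddy deviations untouched. The relaxation "physics" below stands in for a real CRM, and every parameter is illustrative.

```python
import numpy as np

def step(field, target, dt=0.1):
    # Toy dynamics: relax a (levels x columns) field toward a target profile.
    return field + dt * (target - field)

def accelerated_step(field, target, alpha=4.0, dt=0.1):
    # Take a normal step, then amplify only the horizontal-mean increment.
    new = step(field, target, dt)
    mean_old = field.mean(axis=1, keepdims=True)
    mean_new = new.mean(axis=1, keepdims=True)
    return new + (alpha - 1.0) * (mean_new - mean_old)

rng = np.random.default_rng(1)
target = np.linspace(300.0, 250.0, 20)[:, None]   # target mean profile
field = 275.0 + rng.normal(0.0, 1.0, (20, 50))    # 20 levels x 50 columns

plain, fast = field.copy(), field.copy()
for _ in range(10):
    plain = step(plain, target)
    fast = accelerated_step(fast, target)

# The accelerated run's mean profile is closer to the target profile.
err_plain = np.abs(plain.mean(axis=1) - target.ravel()).mean()
err_fast = np.abs(fast.mean(axis=1) - target.ravel()).mean()
assert err_fast < err_plain
```

Because only the slowly evolving mean is sped up, the fast eddy statistics are left to equilibrate on their own time scale, which is what preserves accuracy in the real application.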
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answering important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy, and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities to further improve the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted mainly for AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both CUDA and OpenCL, running simulations on either type of hardware accelerator. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware configurations.
The 3-D numerical simulation research of vacuum injector for linear induction accelerator
NASA Astrophysics Data System (ADS)
Liu, Dagang; Xie, Mengjun; Tang, Xinbing; Liao, Shuqing
2017-01-01
A simulation method for the voltage in-feed and electron injection of a vacuum injector is presented, and the simulated voltage and current are verified. A numerical simulation of the solenoid magnetic field is implemented, and the simulation results are compared with experimental results. A semi-implicit difference algorithm is adopted to suppress numerical noise, and a parallel acceleration algorithm is used to increase the computation speed. The RMS emittance calculation method based on the beam envelope equations is analyzed, and the simulated RMS emittance is compared with experimental data. Finally, the influences of the ferromagnetic rings on the radial and axial magnetic fields of the solenoid, as well as on the beam emittance, are studied.
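The RMS emittance referred to above has a standard statistical definition, eps = sqrt(<x^2><x'^2> - <x x'>^2) with centroids removed, which can be computed directly from particle coordinates. This is a generic sketch of that definition, not a reproduction of the paper's envelope-equation method.

```python
import numpy as np

def rms_emittance(x, xp):
    """Statistical RMS emittance of a particle ensemble:
    eps = sqrt(<x^2><x'^2> - <x x'>^2), centroid subtracted.
    x: positions, xp: transverse angles/momenta (same-length arrays)."""
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt((x**2).mean() * (xp**2).mean() - (x * xp).mean()**2)
```

A fully correlated phase-space line (xp proportional to x) gives zero emittance, while uncorrelated spreads give the product of the RMS widths.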
NASA Astrophysics Data System (ADS)
Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Zhang, X.-J.; Li, J.; Baker, D. N.; Reeves, G. D.; Spence, H. E.; Kletzing, C. A.; Kurth, W. S.; Hospodarsky, G. B.; Blake, J. B.; Fennell, J. F.; Kanekal, S. G.; Angelopoulos, V.; Green, J. C.; Goldstein, J.
2016-06-01
Various physical processes are known to cause acceleration, loss, and transport of energetic electrons in the Earth's radiation belts, but their quantitative roles at different times and locations require further investigation. During the largest storm of the past decade (17 March 2015), relativistic electrons experienced fairly rapid acceleration up to ~7 MeV within 2 days after an initial substantial dropout, as observed by the Van Allen Probes. In the present paper, we evaluate the relative roles of various physical processes during the recovery phase of this large storm using a 3-D diffusion simulation. By quantitatively comparing the observed and simulated electron evolution, we find that chorus plays a critical role in accelerating electrons up to several MeV near the developing peak location and produces characteristic flat-top pitch angle distributions. By including only radial diffusion, the simulation underestimates the observed electron acceleration, while radial diffusion plays an important role in redistributing electrons and potentially accelerates them to even higher energies. Moreover, plasmaspheric hiss is found to provide efficient pitch angle scattering losses for hundreds-of-keV electrons, while its scattering effect on >1 MeV electrons is relatively slow. Although an additional loss process is required to fully explain the overestimated electron fluxes at multi-MeV energies, the combined physical processes of radial diffusion and pitch angle and energy diffusion by chorus and hiss reproduce the observed electron dynamics remarkably well, suggesting that quasi-linear diffusion theory is adequate to evaluate radiation belt electron dynamics during this large storm.
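A schematic one-dimensional piece of such a diffusion code, for the radial-diffusion term df/dt = L^2 d/dL (D_LL/L^2 df/dL) only (pitch-angle and energy diffusion omitted), can be sketched with an explicit finite-difference update. The grid, diffusion coefficients, and fixed-boundary treatment here are illustrative assumptions, not the study's actual numerics.

```python
import numpy as np

def radial_diffusion_step(f, L, D_LL, dt):
    """One explicit step of df/dt = L^2 d/dL (D_LL / L^2 df/dL)
    on a uniform L grid. Boundary values are held fixed.
    Schematic only: real codes add pitch-angle/energy terms and
    implicit time stepping for stability."""
    dL = L[1] - L[0]
    # interface fluxes: (D_LL / L^2) * df/dL averaged between cells
    flux = 0.5 * (D_LL[1:] / L[1:]**2 + D_LL[:-1] / L[:-1]**2) \
        * np.diff(f) / dL
    f_new = f.copy()
    f_new[1:-1] += dt * L[1:-1]**2 * np.diff(flux) / dL
    return f_new
```

A uniform phase-space density is a steady state (zero gradient, zero flux), while a localized peak spreads outward, mimicking radial redistribution of electrons.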
MAGNETIC-ISLAND CONTRACTION AND PARTICLE ACCELERATION IN SIMULATED ERUPTIVE SOLAR FLARES
Guidoni, S. E.; DeVore, C. R.; Karpen, J. T.; Lynch, B. J.
2016-03-20
The mechanism that accelerates particles to the energies required to produce the observed high-energy impulsive emission in solar flares is not well understood. Drake et al. proposed a mechanism for accelerating electrons in contracting magnetic islands formed by kinetic reconnection in multi-layered current sheets (CSs). We apply these ideas to sunward-moving flux ropes (2.5D magnetic islands) formed during fast reconnection in a simulated eruptive flare. A simple analytic model is used to calculate the energy gain of particles orbiting the field lines of the contracting magnetic islands in our ultrahigh-resolution 2.5D numerical simulation. We find that the estimated energy gains in a single island range up to a factor of five. This is higher than that found by Drake et al. for islands in the terrestrial magnetosphere and at the heliopause, due to strong plasma compression that occurs at the flare CS. In order to increase their energy by two orders of magnitude and plausibly account for the observed high-energy flare emission, the electrons must visit multiple contracting islands. This mechanism should produce sporadic emission because island formation is intermittent. Moreover, a large number of particles could be accelerated in each magnetohydrodynamic-scale island, which may explain the inferred rates of energetic-electron production in flares. We conclude that island contraction in the flare CS is a promising candidate for electron acceleration in solar eruptions.
Zhang, Qing Hang; Tan, Soon Huat; Teo, Ee Chon
2008-07-01
The variation of ligament strains over time after rear impact has seldom been investigated. In the current study, a detailed three-dimensional C0-C7 finite element model of the whole head-neck complex, developed previously, was modified to include the T1 vertebra. Rear-impact half-sine pulses with peak values of 3.5 g, 5 g, 6.5 g, and 8 g were applied to the inferior surface of the T1 vertebral body to validate the simulated variations of the intervertebral segmental rotations and to investigate the ligament tensions of the cervical spine under different levels of acceleration. The simulated kinematics of the head-neck complex showed relatively good agreement with the experimental data, with most of the predicted peak values falling within one standard deviation of the experimental data. Under rear impact, the whole C0-T1 structure formed an S-shaped curvature, with flexion at the upper levels and extension at the lower levels at the early stage after impact, during which the lower cervical levels might experience hyperextension. The predicted high resultant strain of the capsular ligaments compared with other ligament groups, even at low impact acceleration, suggests their susceptibility to injury. The peak impact acceleration has a significant effect on the potential injury of ligaments: under higher accelerations, most ligaments will reach failure strain in a much shorter time immediately after impact.
Particle-in-cell Simulation of Electron Acceleration in Solar Coronal Jets
NASA Astrophysics Data System (ADS)
Baumann, G.; Nordlund, Å.
2012-11-01
We investigate electron acceleration resulting from three-dimensional magnetic reconnection between an emerging, twisted magnetic flux rope and a pre-existing weak, open magnetic field. We first follow the rise of an unstable, twisted flux tube with a resistive MHD simulation where the numerical resolution is enhanced by using fixed mesh refinement. As in previous MHD investigations of similar situations, the rise of the flux tube into the pre-existing inclined coronal magnetic field results in the formation of a solar coronal jet. A snapshot of the MHD model is then used as an initial and boundary condition for a particle-in-cell simulation, using up to half a billion cells and over 20 billion charged particles. Particle acceleration occurs mainly in the reconnection current sheet, with accelerated electrons displaying a power law in the energy probability distribution with an index of around -1.5. The main acceleration mechanism is a systematic electric field that strives to maintain the electric current in the current sheet against losses caused by electrons not being able to stay in the current sheet for more than a few seconds at a time.
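A spectral index like the -1.5 quoted above is typically estimated from simulated particle energies by a log-log fit over a binned energy distribution. The sketch below is a generic diagnostic of this kind, not the authors' analysis pipeline; the function name and binning choices are assumptions.

```python
import numpy as np

def powerlaw_index(energies, bins=30):
    """Estimate the power-law index of an energy probability distribution
    p(E) ~ E^s via a least-squares fit of log(density) vs log(E)
    over logarithmically spaced, non-empty bins."""
    edges = np.logspace(np.log10(energies.min()),
                        np.log10(energies.max()), bins)
    counts, edges = np.histogram(energies, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
    density = counts / np.diff(edges)             # counts per unit energy
    mask = counts > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)
    return slope
```

Sampling synthetic energies from a known E^-1.5 distribution and fitting them recovers an index close to -1.5, which is a useful sanity check before applying such a diagnostic to PIC output.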
Simulation of Cosmic Ray Acceleration, Propagation and Interaction in SNR Environment
NASA Astrophysics Data System (ADS)
Lee, S. H.; Kamae, T.; Ellison, D. C.
2007-07-01
Recent studies of young supernova remnants (SNRs) with Chandra, XMM, Suzaku and HESS have revealed complex morphologies and spectral features of the emission sites. The critical question of the relative importance of the two competing gamma-ray emission mechanisms in SNRs, inverse-Compton scattering by high-energy electrons and pion production by energetic protons, may be resolved by GLAST-LAT. To keep pace with the improved observations, we are developing a 3D model of particle acceleration, diffusion, and interaction in an SNR in which broad-band emission from radio to multi-TeV energies, produced by shock-accelerated electrons and ions, can be simulated for a given topology of shock fronts, magnetic field, and ISM densities. The 3D model takes as input the particle spectra predicted by a hydrodynamic simulation of SNR evolution in which nonlinear diffusive shock acceleration is coupled to the remnant dynamics (e.g., Ellison, Decourchelle & Ballet; Ellison & Cassam-Chenai; Ellison, Berezhko & Baring). We will present preliminary models of the Galactic Ridge SNR RX J1713-3946 for selected choices of SNR parameters, magnetic field topology, and ISM density distributions. When constrained by broad-band observations, our models should predict the extent of coupling between spectral shape and morphology and provide direct information on the acceleration efficiency of cosmic-ray electrons and ions in SNRs.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
Li, Yadong; Liu, Jingxiao; Shi, Fei; Tang, Nailing; Yu, Ling
2007-12-01
In the present work, NiTi alloy substrates were activated by three different pretreatment processes. Concentrated simulated body fluids (5×SBF1 and 5×SBF2) were prepared with a citric acid buffer reagent, and calcium phosphate coatings were then formed quickly on the NiTi alloy surface by accelerated biomimetic synthesis after pretreatment. The microstructure, composition, and surface morphology of the calcium phosphate coatings were studied. The results indicate that the calcium phosphate coatings possess a porous, net-like structure composed of precipitated spherical particles less than 3 μm in diameter. XRD analysis shows that the main component of the calcium phosphate coatings is hydroxyapatite, whereas the concentrated 5×SBF simulated body fluid, which lacks the crystal-growth inhibitors Mg2+ and HCO3-, apparently accelerates the growth rate of the hydroxyapatite coatings.
Forced detection Monte Carlo algorithms for accelerated blood vessel image simulations.
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2009-03-01
Two forced detection (FD) variance-reduction Monte Carlo algorithms for image simulations of tissue-embedded objects with matched refractive index are presented. The principle of the algorithms is to force a fraction of the photon weight to the detector at each and every scattering event. The fractional weight is given by the probability for the photon to reach the detector without further interactions. Two imaging setups are applied to a tissue model including blood vessels; the FD algorithms produce results identical to traditional brute-force simulations while running two orders of magnitude faster. Extending the methods to include refractive index mismatches is discussed.
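The core idea, scoring a fraction of the photon weight toward the detector at every scattering event, can be illustrated with a toy one-dimensional random walk. This is heavily simplified relative to the paper's algorithms (isotropic 1-D scattering, slab geometry, no phase function or imaging optics), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def forced_detection_walk(mu_s, mu_a, det_z, n_photons=500, max_events=50):
    """Toy 1-D forced-detection Monte Carlo in a slab [0, det_z].
    At every scattering event, a copy of the photon weight times the
    probability exp(-mu_t * distance) of reaching the detector plane
    without further interaction is scored; the photon then scatters on."""
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    detected = 0.0
    for _ in range(n_photons):
        z, w, direction = 0.0, 1.0, 1.0
        for _ in range(max_events):
            z += direction * rng.exponential(1.0 / mu_t)  # free path
            if z < 0.0 or z > det_z:                      # left the slab
                break
            w *= albedo                                   # survive absorption
            # forced detection: score the unscattered escape probability
            detected += w * np.exp(-mu_t * (det_z - z))
            direction = rng.choice([-1.0, 1.0])           # isotropic 1-D scatter
            if w < 1e-6:                                  # weight cutoff
                break
    return detected / n_photons
```

Because every scattering event contributes to the detector signal, the estimator's variance drops sharply compared with waiting for rare photons to exit through the detector on their own, which is the source of the quoted two-orders-of-magnitude speedup.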
Qiao, Wei; McLennan, Michael; Kennell, Rick; Ebert, David S; Klimeck, Gerhard
2006-01-01
The Network for Computational Nanotechnology (NCN) has developed a science gateway at nanoHUB.org for nanotechnology education and research. Remote users can browse through online seminars and courses, and launch sophisticated nanotechnology simulation tools, all within their web browser. Simulations are supported by a middleware that can route complex jobs to grid supercomputing resources. But what is truly unique about the middleware is the way that it uses hardware accelerated graphics to support both problem setup and result visualization. This paper describes the design and integration of a remote visualization framework into the nanoHUB for interactive visual analytics of nanotechnology simulations. Our services flexibly handle a variety of nanoscience simulations, render them utilizing graphics hardware acceleration in a scalable manner, and deliver them seamlessly through the middleware to the user. Rendering is done only on-demand, as needed, so each graphics hardware unit can simultaneously support many user sessions. Additionally, a novel node distribution scheme further improves our system's scalability. Our approach is not only efficient but also cost-effective. Only a half-dozen render nodes are anticipated to support hundreds of active tool sessions on the nanoHUB. Moreover, this architecture and visual analytics environment provides capabilities that can serve many areas of scientific simulation and analysis beyond nanotechnology with its ability to interactively analyze and visualize multivariate scalar and vector fields.
Testing the paradigms of the glass transition in colloids via dynamic simulation
NASA Astrophysics Data System (ADS)
Wang, Jialun; Peng, Xiaoguang; Li, Qi; McKenna, Gregory; Zia, Roseanna
2016-11-01
Upon cooling, molecular glass-formers undergo a glass transition during which viscosity appears to diverge, and the material transitions from a liquid to an amorphous solid. However, the new state is not an equilibrium phase: material properties such as enthalpy continue to evolve in time. Rather, the material evolves toward an "intransient" state, as measured by the Kovacs signature experiments, e.g. the intrinsic isotherm, which reveals a paradoxical dependence of transition time on quench depth, and suggests that whether the glass transition occurs at the beginning or end of this transition is an open question. Colloidal glass formers provide a natural way to model such behavior, owing to the disparity in time scales that allow tracking of particle dynamics. We interrogate these ideas via dynamic simulation of a hard-sphere colloidal glassy state induced by jumps in volume fraction. We explore three methods to model the jump: evaporation, aspiration, and particle-size jumps. During and following each jump, the positions, velocities, and particle-phase stress are tracked and utilized to characterize relaxation time scales and structural changes. Analogs for the intrinsic isotherms are developed. The results provide insight into the existence of an "ideal" glass transition.
Simulations of ion acceleration at non-relativistic shocks. II. Magnetic field amplification
Caprioli, D.; Spitkovsky, A.
2014-10-10
We use large hybrid simulations to study ion acceleration and the generation of magnetic turbulence due to the streaming of particles that are self-consistently accelerated at non-relativistic shocks. When acceleration is efficient, we find that the upstream magnetic field is significantly amplified. The total amplification factor is larger than 10 for shocks with Alfvénic Mach number M = 100, and scales with the square root of M. The spectral energy density of the excited magnetic turbulence is determined by the energy distribution of the accelerated particles, and for moderately strong shocks (M ≲ 30) agrees well with the prediction of the resonant streaming instability, in the framework of quasilinear theory of diffusive shock acceleration. For M ≳ 30, instead, Bell's non-resonant hybrid (NRH) instability is predicted and found to grow faster than the resonant instability. NRH modes are excited far upstream by escaping particles and initially grow without disrupting the current, their typical wavelengths being much shorter than the current ions' gyroradii. Then, in the nonlinear stage, the most unstable modes migrate to larger and larger wavelengths, eventually becoming resonant in wavelength with the driving ions, which start to diffuse. Ahead of strong shocks we distinguish two regions, separated by the free-escape boundary: the far upstream, where field amplification is provided by the current of escaping ions via the NRH instability, and the shock precursor, where energetic particles are effectively magnetized and field amplification is provided by the current of diffusing ions. The presented scalings of magnetic field amplification enable the inclusion of self-consistent microphysics into phenomenological models of ion acceleration at non-relativistic shocks.
Automated detection and analysis of particle beams in laser-plasma accelerator simulations
Ushizima, Daniela Mayumi; Geddes, C. G.; Cormier-Michel, E.; Bethel, E. Wes; Jacobsen, J.; Prabhat; Rübel, O.; Weber, G.; Hamann, B.
2010-05-21
Numerical simulations of laser-plasma wakefield (particle) accelerators model the acceleration of electrons trapped in the plasma oscillations (wakes) left behind when an intense laser pulse propagates through the plasma. The goal of these simulations is to better understand the processes involved in plasma wake generation and how electrons are trapped and accelerated by the wake. Such accelerators offer high accelerating gradients, potentially reducing the size and cost of new accelerators. One operating regime of interest is where a trapped subset of electrons loads the wake and forms an isolated group of accelerated particles with low spread in momentum and position, desirable characteristics for many applications. The electrons trapped in the wake may be accelerated to high energies, with the plasma gradient in the wake reaching up to a gigaelectronvolt per centimeter. High-energy electron accelerators power intense X-ray to terahertz radiation sources and are used in many applications, including medical radiotherapy and imaging. To extract information from a simulation about the quality of the beam, a typical approach is to examine plots of the entire dataset, visually determining the parameters necessary to select a subset of particles, which is then further analyzed. This procedure requires laborious examination of massive datasets over many time steps using several plots, a routine that is infeasible for large data collections. Demand for automated analysis is growing along with the volume and size of simulations. Current 2D LWFA simulation datasets are typically between 1 GB and 100 GB in size, while simulations in 3D are of the order of terabytes. The increase in the number of datasets and dataset sizes leads to a need for automatic routines to recognize particle patterns such as particle bunches (beams of electrons) for subsequent analysis. Because of the growth in dataset size, the application of machine learning techniques for
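The manual selection procedure described above can be caricatured as a threshold-plus-compactness rule: keep particles above a momentum cut, then keep only those clustered in position. The sketch below is a deliberately crude stand-in (all names and thresholds are illustrative); the paper's point is that this kind of heuristic is exactly what automated, learning-based analysis should replace.

```python
import numpy as np

def find_bunch(x, px, px_min, window=5.0):
    """Schematic automated bunch finder.
    Keep particles with longitudinal momentum above `px_min`, then
    keep only those within `window` of the median position of that
    fast subset. Returns a boolean selection mask."""
    fast = px > px_min
    if not fast.any():
        return np.zeros_like(fast, dtype=bool)
    x_med = np.median(x[fast])                 # center of the fast group
    return fast & (np.abs(x - x_med) < window) # compactness cut
```

Such a mask can then feed downstream diagnostics (e.g., momentum spread of the selected bunch) without a human inspecting phase-space plots at every time step.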
NASA Astrophysics Data System (ADS)
Dudnikova, Galina; Malkov, Mikhail; Sagdeev, Roald; Liseykina, Tatjana; Hanusch, Adrian
2016-10-01
Elemental composition of galactic cosmic rays (CR) probably holds the key to their origin. Most likely, they are accelerated at collisionless shocks in supernova remnants, but the acceleration mechanism is not entirely understood. One complicated problem is "injection", a process whereby the shock selects a tiny fraction of particles to keep crossing its front and gain more energy. Comparing the injection rates of particles with different mass-to-charge ratios is a powerful tool for studying this process. Recent advances in measurements of the CR He/p ratio have provided particularly important new clues. We performed a series of hybrid simulations and analyzed the joint injection of protons and helium, in conjunction with the upstream waves they generate. The emphasis of this work is on the bootstrap aspects of injection, manifested in particle confinement to the shock and, therefore, their continuing acceleration by the self-driven waves. The waves are initially generated by He and protons in separate spectral regions, and their interaction plays a crucial role in particle acceleration. The work is ongoing, and new results will be reported along with their analysis and comparison with the latest data from the AMS-02 space-based spectrometer. Work supported by Grant RFBR 16-01-00209, the NASA ATP program under Award NNX14AH36G, and the US Department of Energy under Award No. DE-FG02-04ER54738.
NASA Astrophysics Data System (ADS)
Li, Yuan; Ma, Yu; Li, Juan; Jiang, Xiaoming; Hu, Wenbing
2012-03-01
We report dynamic Monte Carlo simulations of microphase-separated diblock copolymers, investigating how crystallization of one species can accelerate the subsequent crystallization of the other. Although the lattice copolymer model imposes a boundary constraint on the long periods of the microdomains, the single-molecular-level force balance between the two blocks and its change can be revealed with this simple approach. We found two contrasting acceleration mechanisms: (1) the metastable lamellar crystals of one species become thicker at higher crystallization temperatures, sacrificing microphase interfacial area to produce a larger coil stretching of the other, amorphous species and hence to accelerate its subsequent crystallization with a more favorable conformation; (2) in the case allowing chain sliding in the crystal, the equilibrated lamellar crystals of one species become thinner at higher temperatures, sacrificing thermal stability to gain a higher conformational entropy of the other, amorphous species and hence to accelerate its subsequent crystallization with a stronger tension at the block junction. Parallel experimental situations are discussed.
GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac
2017-03-01
The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits a better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.
Kim, Myung-Hee Y; Rusek, Adam; Cucinotta, Francis A
2015-01-01
For radiobiology research on the health risks of galactic cosmic rays (GCR) ground-based accelerators have been used with mono-energetic beams of single high charge, Z and energy, E (HZE) particles. In this paper, we consider the pros and cons of a GCR reference field at a particle accelerator. At the NASA Space Radiation Laboratory (NSRL), we have proposed a GCR simulator, which implements a new rapid switching mode and higher energy beam extraction to 1.5 GeV/u, in order to integrate multiple ions into a single simulation within hours or longer for chronic exposures. After considering the GCR environment and energy limitations of NSRL, we performed extensive simulation studies using the stochastic transport code, GERMcode (GCR Event Risk Model) to define a GCR reference field using 9 HZE particle beam-energy combinations each with a unique absorber thickness to provide fragmentation and 10 or more energies of proton and (4)He beams. The reference field is shown to well represent the charge dependence of GCR dose in several energy bins behind shielding compared to a simulated GCR environment. However, a more significant challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with animal models of human risks. We discuss issues in approaches to map important biological time scales in experimental models using ground-based simulation, with extended exposure of up to a few weeks using chronic or fractionation exposures. A kinetics model of HZE particle hit probabilities suggests that experimental simulations of several weeks will be needed to avoid high fluence rate artifacts, which places limitations on the experiments to be performed. Ultimately risk estimates are limited by theoretical understanding, and focus on improving knowledge of mechanisms and development of experimental models to improve this understanding should remain the highest priority for space radiobiology research.
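The hit-probability argument above can be illustrated with a simple Poisson model: if particle traversals of a cell nucleus are Poisson-distributed, compressing a chronic exposure into a short experiment raises the chance that a nucleus is hit more than once, a fluence-rate artifact. This is a schematic sketch only; the actual kinetics model is more detailed, and the function name and parameter values are illustrative.

```python
import math

def multi_hit_fraction(fluence_rate, area, days):
    """Poisson estimate of the fraction of cell nuclei hit more than once.
    fluence_rate: particles per unit area per day (illustrative units),
    area: nucleus cross-sectional area, days: exposure duration.
    P(hits > 1) = 1 - e^{-m} - m e^{-m}, with mean hits m."""
    m = fluence_rate * area * days
    return 1.0 - math.exp(-m) - m * math.exp(-m)
```

For small mean hit numbers m the multi-hit fraction scales as m^2/2, so spreading the same total fluence over several weeks instead of hours sharply reduces multi-hit artifacts, consistent with the experimental limitation discussed above.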
Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis A.
2015-01-01
For radiobiology research on the health risks of galactic cosmic rays (GCR) ground-based accelerators have been used with mono-energetic beams of single high charge, Z and energy, E (HZE) particles. In this paper, we consider the pros and cons of a GCR reference field at a particle accelerator. At the NASA Space Radiation Laboratory (NSRL), we have proposed a GCR simulator, which implements a new rapid switching mode and higher energy beam extraction to 1.5 GeV/u, in order to integrate multiple ions into a single simulation within hours or longer for chronic exposures. After considering the GCR environment and energy limitations of NSRL, we performed extensive simulation studies using the stochastic transport code, GERMcode (GCR Event Risk Model) to define a GCR reference field using 9 HZE particle beam–energy combinations each with a unique absorber thickness to provide fragmentation and 10 or more energies of proton and 4He beams. The reference field is shown to well represent the charge dependence of GCR dose in several energy bins behind shielding compared to a simulated GCR environment. However, a more significant challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with animal models of human risks. We discuss issues in approaches to map important biological time scales in experimental models using ground-based simulation, with extended exposure of up to a few weeks using chronic or fractionation exposures. A kinetics model of HZE particle hit probabilities suggests that experimental simulations of several weeks will be needed to avoid high fluence rate artifacts, which places limitations on the experiments to be performed. Ultimately risk estimates are limited by theoretical understanding, and focus on improving knowledge of mechanisms and development of experimental models to improve this understanding should remain the highest priority for space radiobiology research. PMID:26090339
Dijk, W. van; Corstens, J. M.; Stragier, X. F. D.; Brussaard, G. J. H.; Geer, S. B. van der
2009-01-22
One of the most compelling reasons to use external injection of electrons into a laser wakefield accelerator is to improve the stability and reproducibility of the accelerated electrons. We have built a simulation tool based on particle tracking to investigate the expected output parameters. Specifically, we are simulating the variations in energy and bunch charge under the influence of variations in laser power and timing jitter. In these simulations, a laser pulse with a0 = 0.32 to a0 = 1.02 and 10% shot-to-shot energy fluctuation is focused into a plasma waveguide with a density of 1.0x10^24 m^-3 and a calculated matched spot size of 50.2 μm. The timing of the injected electron bunch with respect to the laser pulse is varied by up to 1 ps from the nominal timing (1 ps ahead of or behind the laser pulse, depending on the regime). The simulation method and first results will be presented. Shortcomings and possible extensions to the model will be discussed.
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
NASA Astrophysics Data System (ADS)
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by applying a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present a GPU implementation of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC code on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains from using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the N^2 dependence of coagulation.
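As a minimal CPU-side sketch of the inverse scheme the abstract refers to (the function and variable names below are illustrative, not taken from the paper's code): a coagulation event is selected by drawing a uniform random number and locating it in the cumulative distribution of pairwise coagulation rates; the GPU version parallelizes the rate accumulation and the search.

```python
import random

def coagulation_step(volumes, kernel):
    """Perform one Monte Carlo coagulation event via inverse (CDF) sampling.

    volumes : list of particle volumes (modified in place)
    kernel  : function K(v_i, v_j) giving the coagulation rate of a pair
    Returns the exponentially distributed waiting time of the event.
    """
    n = len(volumes)
    # Build the cumulative distribution over all i < j pairs.
    pairs, cum, total = [], [], 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += kernel(volumes[i], volumes[j])
            pairs.append((i, j))
            cum.append(total)
    # Inverse method: draw u ~ U(0, total) and locate it in the CDF.
    u = random.random() * total
    k = next(idx for idx, c in enumerate(cum) if c >= u)
    i, j = pairs[k]
    # Merge: particle i absorbs particle j (volume is conserved).
    volumes[i] += volumes[j]
    del volumes[j]
    # Waiting time for an event with total rate `total`.
    return random.expovariate(total)
```

The N^2 pair enumeration here is exactly the cost the paper's data-parallel GPU formulation targets.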
Measurements and simulations of wakefields at the Accelerator Test Facility 2
NASA Astrophysics Data System (ADS)
Snuverink, J.; Ainsworth, R.; Boogert, S. T.; Cullinan, F. J.; Lyapin, A.; Kim, Y. I.; Kubo, K.; Kuroda, S.; Okugi, T.; Tauchi, T.; Terunuma, N.; Urakawa, J.; White, G. R.
2016-09-01
Wakefields are an important factor in accelerator design, and a real concern for preserving the low beam emittance in modern machines. Charge dependent beam size growth has been observed at the Accelerator Test Facility (ATF2), a test accelerator for future linear collider beam delivery systems. Part of the explanation of this beam size growth is wakefields. In this paper we present numerical calculations of the wakefields produced by several types of geometrical discontinuities in the beam line, as well as tracking simulations to estimate the induced effects. We also discuss precision beam kick measurements performed with the ATF2 cavity beam position monitor system for a test wakefield source in a movable section of the vacuum chamber. Using an improved model independent method we measured a wakefield kick for this movable section of about 0.49 V/pC/mm, which, compared to the calculated value of 0.41 V/pC/mm from electromagnetic simulations, is within the systematic error.
NASA Technical Reports Server (NTRS)
Brody, Adam R.; Ellis, Stephen R.
1992-01-01
Nine commercial airline pilots served as test subjects in a study to compare acceleration control with pulse control in simulated spacecraft maneuvers. Simulated remote dockings of an orbital maneuvering vehicle (OMV) to a space station were initiated from 50, 100, and 150 meters along the station's -V-bar (minus velocity vector). All unsuccessful missions were reflown. A five-way mixed analysis of variance (ANOVA), with one between-subjects factor (first mode) and four within-subjects factors (mode, block, range, and trial), was performed on the data. Recorded performance measures included mission duration and fuel consumption along each of the three coordinate axes. Mission duration was lower with pulse mode, while delta V (fuel consumption) was lower with acceleration mode. Subjects used more fuel to travel faster with pulse mode than with acceleration mode. Mission duration, delta V, X delta V, Y delta V, and Z delta V all increased with range. Subjects commanded the OMV to 'fly' at faster rates from farther distances. These higher average velocities were paid for with increased fuel consumption. Asymmetrical transfer was found, in that the mode transitions could not be predicted solely from the mission duration main effect. More testing is advised to better understand the manual control aspects of spaceflight maneuvers.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Wen, Lihua; Naboulsi, Sam; Eason, Thomas; Vasudevan, Vijay K.; Qian, Dong
2016-08-01
A multiscale space-time finite element method based on the time-discontinuous Galerkin (TDG) method and an enrichment approach is presented in this work, with a focus on improving the computational efficiency of high cycle fatigue simulations. While the robustness of the TDG-based space-time method has been extensively demonstrated, a critical barrier to its wider application is the large computational cost due to the additional temporal dimension and the enrichment that are introduced. The present implementation focuses on two aspects: first, a preconditioned iterative solver is developed along with techniques for optimizing matrix storage and operations. Second, parallel algorithms based on multi-core graphics processing units are established to accelerate the progressive damage model implementation. It is shown that the computing time and memory of the accelerated space-time implementation scale with the number of degrees of freedom N as approximately O(N^1.6) and O(N), respectively. Finally, we demonstrate the accelerated space-time FEM simulation through benchmark problems.
NASA Astrophysics Data System (ADS)
Mori, Warren B.
2015-11-01
Computer simulations have been an integral part of plasma physics research since the early 1960s. Initially, they provided the ability to confirm and test linear and nonlinear theories in one dimension. As simulation capabilities and computational power improved, simulations were also used to test new ideas and applications of plasmas in multiple dimensions. As progress continued, simulations were also used to model experiments. Today computer simulations of plasmas are used ubiquitously to test new theories, understand complicated nonlinear phenomena, model the full temporal and spatial scale of experiments, simulate parameters beyond the reach of current experiments, and test the performance of new devices before large capital expenditures are made to build them. In this talk I review the progress of simulations in a particular area of plasma physics: plasma based acceleration (PBA). In PBA, a short laser pulse or particle beam propagates through long regions of plasma, creating plasma wave wakefields on which electrons or positrons surf to high energies. In some cases the wakefields are highly nonlinear, involve three-dimensional effects, and the trajectories of plasma particles cross, making it essential that fully kinetic and three-dimensional models are used. I will show how particle-in-cell (PIC) simulations were initially used to propose the basic idea of PBA in one dimension. I will review some of the dramatic progress in the experimental demonstration of PBA and show how this progress was helped by a synergy between experiments and full-scale multi-dimensional PIC simulations. This will include a review of how the capability of PIC simulation tools has improved. I will also touch on some recent progress on improvements to PIC simulations of PBA and discuss how these improvements may push the synergy further towards real time steering of experiments and start-to-end modeling of key components of a future linear collider or XFEL based on PBA.
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
Curtis, J.H.; Michelotti, M.D.; Riemer, N.; Heath, M.T.; West, M.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition under atmospherically relevant conditions, we demonstrate an approximately 50-fold increase in algorithm efficiency.
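A hedged sketch of the binning idea: grouping particles into size bins lets a single per-bin bound on the removal rate drive a rejection (thinning) step, so the exact rate function is evaluated only for proposed removals. For illustration this assumes a removal rate nondecreasing in particle size; the paper's Binned Algorithm and the actual dry-deposition rate are more general, and all names below are illustrative.

```python
import math
import random

def binned_removal(particles, rate, dt, n_bins=8):
    """Stochastic removal over a time step dt using size bins.

    `rate(d)` is the removal rate of a particle of size d, assumed
    nondecreasing in d so that each bin's upper edge bounds the rate of
    every particle inside it. Removal is then done by thinning: propose
    with the bin's bound, accept with the exact ratio. Returns the list
    of surviving particles.
    """
    if not particles:
        return []
    lo, hi = min(particles), max(particles)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for d in particles:
        k = min(int((d - lo) / width), n_bins - 1)
        bins[k].append(d)
    survivors = []
    for k, members in enumerate(bins):
        upper = lo + (k + 1) * width              # bin's upper size edge
        p_max = 1.0 - math.exp(-rate(upper) * dt)  # bound on removal prob.
        for d in members:
            if random.random() < p_max:
                p_exact = 1.0 - math.exp(-rate(d) * dt)
                if random.random() < p_exact / p_max:
                    continue                       # particle removed
            survivors.append(d)
    return survivors
```

For low removal rates, p_max is small and the inner branch (and the call to `rate`) is almost never taken, which mirrors the regime where the paper reports the largest gains.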
NASA Astrophysics Data System (ADS)
Le Grand, Scott; Götz, Andreas W.; Walker, Ross C.
2013-02-01
A new precision model is proposed for the acceleration of all-atom classical molecular dynamics (MD) simulations on graphics processing units (GPUs). This precision model replaces double precision arithmetic with fixed point integer arithmetic for the accumulation of force components, as compared to a previously introduced model that uses mixed single/double precision arithmetic. This significantly boosts performance on modern GPU hardware without sacrificing numerical accuracy. We present an implementation for NVIDIA GPUs of both generalized Born implicit solvent simulations and explicit solvent simulations using the particle mesh Ewald (PME) algorithm for long-range electrostatics under this precision model. Tests demonstrate both the performance of this implementation and its numerical stability for constant energy and constant temperature biomolecular MD, as compared to a double precision CPU implementation and to double and mixed single/double precision GPU implementations.
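The fixed-point accumulation idea can be illustrated in a few lines: scaling each force component to a 64-bit integer makes the accumulation exact and order-independent, which is what lets many GPU threads add into one buffer without the run-to-run nondeterminism of floating-point summation. The scale factor below is an illustrative choice, not the one used in the paper's implementation.

```python
# Number of fractional bits; forces are assumed small enough to fit the
# integer range at this scale (an illustrative choice, not the paper's).
FIXED_SCALE = 2 ** 40

def to_fixed(x):
    """Round a force component to a fixed-point integer."""
    return int(round(x * FIXED_SCALE))

def accumulate_fixed(components):
    """Sum force components in fixed point and convert back.

    Integer addition is exact and associative, so the result is bitwise
    identical for any summation order -- the property a GPU relies on
    when many threads atomically accumulate into one force buffer.
    """
    acc = 0
    for x in components:
        acc += to_fixed(x)
    return acc / FIXED_SCALE
```

By contrast, reordering a plain floating-point sum generally changes the rounding and hence the result, which is why double precision accumulation on GPUs is both slower and not reproducible across launches.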
Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids.
Deng, Jie; Zimmerman, Jonathan A.; Thompson, Aidan Patrick; Brown, William Michael; Plimpton, Steven James; Zhou, Xiao Wang; Wagner, Gregory John; Erickson, Lindsay Crowl
2011-09-01
Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.
Experimental validation of neutron activation simulation of a Varian medical linear accelerator.
Morato, S; Juste, B; Miro, R; Verdu, G; Diez, S
2016-08-01
This work presents a Monte Carlo simulation, using the latest version of MCNP (v. 6.1.1), of a Varian Clinac emitting a 15 MeV photon beam. The main objective of the work is to estimate the photoneutron production and activated products inside the medical linear accelerator head. To that end, the Varian linac head was modelled in detail using the manufacturer's information, and the model was generated with CAD software and exported as a mesh to be included in the particle transport simulation. The model includes the transport of photoneutrons generated by primary photons and the (n, γ) reactions which can result in activation products. The validation of this study was done using experimental measurements. Activation products were identified by in situ gamma spectroscopy at the exit of the jaws of the linac shortly after termination of a high energy photon beam irradiation. Comparison between experimental and simulation results shows good agreement.
3-D Simulations of Plasma Wakefield Acceleration with Non-Idealized Plasmas and Beams
Deng, S.; Katsouleas, T.; Lee, S.; Muggli, P.; Mori, W.B.; Hemker, R.; Ren, C.; Huang, C.; Dodd, E.; Blue, B.E.; Clayton, C.E.; Joshi, C.; Wang, S.; Decker, F.J.; Hogan, M.J.; Iverson, R.H.; O'Connell, C.; Raimondi, P.; Walz, D.
2005-09-27
3-D Particle-in-cell OSIRIS simulations of the current E-162 Plasma Wakefield Accelerator Experiment are presented in which a number of non-ideal conditions are modeled simultaneously. These include tilts on the beam in both planes, asymmetric beam emittance, beam energy spread and plasma inhomogeneities both longitudinally and transverse to the beam axis. The relative importance of the non-ideal conditions is discussed and a worst case estimate of the effect of these on energy gain is obtained. The simulation output is then propagated through the downstream optics, drift spaces and apertures leading to the experimental diagnostics to provide insight into the differences between actual beam conditions and what is measured. The work represents a milestone in the level of detail of simulation comparisons to plasma experiments.
Monte Carlo simulation of a medical linear accelerator for radiotherapy use.
Serrano, B; Hachem, A; Franchisseur, E; Hérault, J; Marcié, S; Costa, A; Bensadoun, R J; Barthe, J; Gérard, J P
2006-01-01
The Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended) was used to model a 25 MV photon beam from a PRIMUS (KD2-Siemens) medical linear electron accelerator at the Centre Antoine Lacassagne in Nice. The entire geometry, including the accelerator head and the water phantom, was simulated to calculate the dose profile and the relative depth-dose distribution. The measurements were made using an ionisation chamber in water for a range of square field sizes. The first results show that the mean electron beam energy is not 19 MeV as stated by Siemens; agreement between the Monte Carlo calculations and the measured data is obtained when the mean electron beam energy is approximately 15 MeV. These encouraging results will make it possible to check the calculated data given by the treatment planning system, especially for small fields in high gradient heterogeneous zones, typical of the intensity modulated radiation therapy technique.
NASA Astrophysics Data System (ADS)
Kluchevskaya, Y. D.; Polozov, S. M.
2016-07-01
It was proposed to develop a biperiodic accelerating structure with an operating frequency of 27 GHz in order to assess the possibility of designing a compact accelerating structure for medical applications. Because the wavelength is 3-10 times shorter than in conventional structures of the 10 cm and 3 cm ranges, a more careful simulation of the variation of the structure's characteristics is necessary in this case. Results of such a study are presented in this article. In addition, the combination of high electromagnetic fields and long pulses at a high operating frequency leads to temperature increases in the structure, thermal deformation, and significant changes in the resonator characteristics, including the frequency of the RF pulse. Development results for three versions of a temperature stabilization system are also discussed.
Numerical simulations of Hall-effect plasma accelerators on a magnetic-field-aligned mesh.
Mikellides, Ioannis G; Katz, Ira
2012-10-01
The ionized gas in Hall-effect plasma accelerators spans a wide range of spatial and temporal scales, and exhibits diverse physics some of which remain elusive even after decades of research. Inside the acceleration channel a quasiradial applied magnetic field impedes the current of electrons perpendicular to it in favor of a significant component in the E×B direction. Ions are unmagnetized and, arguably, of wide collisional mean free paths. Collisions between the atomic species are rare. This paper reports on a computational approach that solves numerically the 2D axisymmetric vector form of Ohm's law with no assumptions regarding the resistance to classical electron transport in the parallel relative to the perpendicular direction. The numerical challenges related to the large disparity of the transport coefficients in the two directions are met by solving the equations on a computational mesh that is aligned with the applied magnetic field. This approach allows for a large physical domain that extends more than five times the thruster channel length in the axial direction and encompasses the cathode boundary where the lines of force can become nonisothermal. It also allows for the self-consistent solution of the plasma conservation laws near the anode boundary, and for simulations in accelerators with complex magnetic field topologies. Ions are treated as an isothermal, cold (relative to the electrons) fluid, accounting for the ion drag in the momentum equation due to ion-neutral (charge-exchange) and ion-ion collisions. The density of the atomic species is determined using an algorithm that eliminates the statistical noise associated with discrete-particle methods. Numerical simulations are presented that illustrate the impact of the above-mentioned features on our understanding of the plasma in these accelerators.
Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang, Steve B.; Jia, Xun
2017-04-01
The accurate simulation of water radiolysis is an important step towards understanding the mechanisms of radiobiology and quantitatively testing hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time-consuming, taking hours or even days to complete on a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time-consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutually competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an NVIDIA Titan Black GPU card, completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. Compared with Geant4-DNA, which was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.
A 3D MPI-Parallel GPU-accelerated framework for simulating ocean wave energy converters
NASA Astrophysics Data System (ADS)
Pathak, Ashish; Raessi, Mehdi
2015-11-01
We present an MPI-parallel GPU-accelerated computational framework for studying the interaction between ocean waves and wave energy converters (WECs). The computational framework captures the viscous effects, nonlinear fluid-structure interaction (FSI), and breaking of waves around the structure, which cannot be captured in many potential flow solvers commonly used for WEC simulations. The full Navier-Stokes equations are solved using the two-step projection method, which is accelerated by porting the pressure Poisson equation to GPUs. The FSI is captured using the numerically stable fictitious domain method. A novel three-phase interface reconstruction algorithm is used to resolve three phases in a VOF-PLIC context. A consistent mass and momentum transport approach enables simulations at high density ratios. The accuracy of the overall framework is demonstrated via an array of test cases. Numerical simulations of the interaction between ocean waves and WECs are presented. Funding from the National Science Foundation CBET-1236462 grant is gratefully acknowledged.
GeNN: a code generation framework for accelerated brain simulations
NASA Astrophysics Data System (ADS)
Yavuz, Esin; Turner, James; Nowotny, Thomas
2016-01-01
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
Stable boosted-frame simulations of laser-wakefield acceleration using Galilean coordinates
NASA Astrophysics Data System (ADS)
Lehe, Remi; Kirchen, Manuel; Godfrey, Brendan; Maier, Andreas; Vay, Jean-Luc
2016-10-01
While Particle-In-Cell (PIC) simulations of laser-wakefield acceleration are typically very computationally expensive, it is well-known that representing the system in a well-chosen Lorentz frame can reduce the computational cost by orders of magnitude. One of the limitations of this ``boosted-frame'' technique is the Numerical Cherenkov Instability (NCI), a numerical instability that rapidly grows in the boosted frame and must be eliminated in order to obtain valid physical results. Several methods have been proposed to eliminate the NCI, but they introduce additional numerical corrections (e.g. heavy smoothing, unphysical modification of the dispersion relation, etc.) which could potentially alter the physics. By contrast, here we show that, for boosted-frame simulations of laser-wakefield acceleration, the NCI can be eliminated simply by integrating the PIC equations in Galilean coordinates (a.k.a. comoving coordinates), without additional numerical correction. Using this technique, we show excellent agreement between simulations in the laboratory frame and the Lorentz-boosted frame, with more than 2 orders of magnitude speedup in the latter case. Work supported by US-DOE Contract DE-AC02-05CH11231.
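The Galilean (comoving) coordinate change underlying this approach is a standard change of variables; as a hedged sketch, with the drift velocity v_gal chosen so that the plasma is approximately at rest on the grid in the boosted frame, the grid coordinates and derivatives transform as:

```latex
% Galilean (comoving) coordinates for boosted-frame PIC:
% the grid drifts with the plasma at a fixed velocity v_gal.
\begin{aligned}
  x' &= x, \qquad y' = y, \qquad z' = z - v_{\mathrm{gal}}\, t, \qquad t' = t,\\[4pt]
  \left.\frac{\partial}{\partial t}\right|_{z}
    &= \left.\frac{\partial}{\partial t'}\right|_{z'}
       - v_{\mathrm{gal}}\,\frac{\partial}{\partial z'},
  \qquad
  \frac{\partial}{\partial z} = \frac{\partial}{\partial z'}.
\end{aligned}
```

Integrating the field equations in these coordinates, rather than applying them as a post-hoc correction, is what the abstract credits with removing the NCI without extra smoothing.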
A grid computing-based approach for the acceleration of simulations in cardiology.
Alonso, José M; Ferrero, José M; Hernández, Vicente; Moltó, Germán; Saiz, Javier; Trénor, Beatriz
2008-03-01
This paper combines high-performance computing and grid computing technologies to accelerate multiple executions of a biomedical application that simulates action potential propagation in cardiac tissues. First, a parallelization strategy was employed to accelerate the execution of simulations on a cluster of personal computers (PCs). Then, grid computing was employed to concurrently perform the multiple simulations that compose the cardiac case studies on the resources of a grid deployment, by means of a service-oriented approach. In this way, biomedical experts are provided with a gateway to easily access a grid infrastructure for the execution of these research studies. Emphasis is placed on the methodology employed. To assess the benefits of the grid, a cardiac case study, which analyzes the effects of premature stimulation on reentry generation during myocardial ischemia, was carried out. The collaborative usage of a distributed computing infrastructure has reduced the time required for the execution of cardiac case studies, which allows, for example, more accurate decisions to be made when evaluating the effects of new antiarrhythmic drugs on the electrical activity of the heart.
The use of computed radiography for routine linear accelerator and simulator quality control.
Patel, I; Natarajan, T; Hassan, S S; Kirby, M C
2009-10-01
Computed radiography (CR) systems were originally developed for clinical imaging, and much has been published on their effectiveness as a film replacement to this end. However, little has been published on their use for routine linear accelerator and simulator quality control, and we have therefore evaluated the Kodak 2000RT system with large Agfa CR plates as a replacement for film for this function. A prerequisite for any such use is a detailed understanding of the system's behaviour, hence characteristics such as spatial uniformity of response, reproducibility of spatial accuracy, plate signal decay with time and the dose-response of the plates were investigated. Finally, a comparison of results obtained using CR for the measurement of radiation field dimensions was made against those from radiographic film, and found to be in agreement within 0.1 mm (mean difference for high-resolution images; 0.3 mm root mean square difference) for megavoltage images and 0.3 mm (maximum difference) for simulator images. In conclusion, the CR system has been shown to be a good alternative to radiographic film for routine quality control of linear accelerators and simulators.
Clouds and Precipitation Simulated by the US DOE Accelerated Climate Modeling for Energy (ACME)
NASA Astrophysics Data System (ADS)
Xie, S.; Lin, W.; Yoon, J. H.; Ma, P. L.; Rasch, P. J.; Ghan, S.; Zhang, K.; Zhang, Y.; Zhang, C.; Bogenschutz, P.; Gettelman, A.; Larson, V. E.; Neale, R. B.; Park, S.; Zhang, G. J.
2015-12-01
A new US Department of Energy (DOE) climate modeling effort is to develop an Accelerated Climate Model for Energy (ACME) to accelerate the development and application of fully coupled, state-of-the-art Earth system models for scientific and energy applications. ACME is a high-resolution climate model with 0.25 degree horizontal resolution and more than 60 levels in the vertical. It starts from the Community Earth System Model (CESM) with notable changes to its physical parameterizations and other components. This presentation provides an overview of the ACME model's capability in simulating clouds and precipitation and its sensitivity to convection schemes. Results using several state-of-the-art cumulus convection schemes, including the unified parameterizations being developed in the climate community, will be presented. These convection schemes are evaluated in a multi-scale framework, including both short-range hindcasts and free-running climate simulations, against both satellite data and ground-based measurements. Running a climate model in short-range hindcast mode has been proven to be an efficient way to understand model deficiencies. The analysis focuses on those systematic errors in cloud and precipitation simulations that are shared by many climate models. The goal is to understand which model deficiencies might be primarily responsible for these systematic errors.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu
2011-07-01
The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now offers great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (S_N) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56, compared with one Intel Xeon X5670 chip, to 8.14, compared with one Intel Core Q6600 chip, for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
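As a hedged illustration of the discrete ordinates method with source iteration, reduced to a one-group 1D slab with diamond differencing and vacuum boundaries (Sweep3D itself is the 3D Cartesian analogue, and all parameters below are illustrative):

```python
def sn_source_iteration(nx=50, dx=0.2, sigma_t=1.0, sigma_s=0.5,
                        q=1.0, n_angles=4, tol=1e-8):
    """One-group 1D slab S_N transport via sweeps and source iteration.

    Returns the converged scalar flux (angular average) on nx cells.
    Vacuum boundaries: no incoming flux at either edge.
    """
    # Symmetric midpoint angle set on (-1, 1); weights sum to one.
    mus = [(2 * k + 1) / (2 * n_angles) for k in range(n_angles)]
    mus = [-m for m in mus] + mus
    w = 1.0 / len(mus)
    phi = [0.0] * nx
    while True:
        # Isotropic source: scattering off the current flux plus emission.
        src = [0.5 * (sigma_s * p + q) for p in phi]
        new = [0.0] * nx
        for mu in mus:
            # Sweep cells in the direction of particle travel.
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            psi_in = 0.0           # vacuum boundary: no incoming flux
            a = abs(mu) / dx
            for i in cells:
                # Diamond difference closure: psi_cell = (in + out) / 2.
                psi_c = (src[i] + 2 * a * psi_in) / (sigma_t + 2 * a)
                psi_in = 2 * psi_c - psi_in   # outgoing feeds next cell
                new[i] += w * psi_c
        if max(abs(n - p) for n, p in zip(new, phi)) < tol:
            return new
        phi = new
```

The nested angle/cell loops are the "sweep"; in 3D the wavefront of cells that can be updated simultaneously is what the GPU kernel parallelizes.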
Contact detection acceleration in pebble flow simulation for pebble bed reactor systems
Li, Y.; Ji, W.
2013-07-01
Pebble flow simulation plays an important role in the steady-state and transient analysis of thermal-hydraulics and neutronics for Pebble Bed Reactors (PBRs). The Discrete Element Method (DEM) and the modified Molecular Dynamics (MD) method are widely used to simulate pebble motion and obtain the distributions of pebble concentration, velocity, and maximum contact stress. Although DEM and MD offer high accuracy in pebble flow simulation, they are computationally expensive, owing to the large number of pebbles to be simulated in a typical PBR and the ubiquitous contacts and collisions between neighboring pebbles that must be detected frequently during the simulation, which greatly restricts their applicability to large-scale PBR designs such as PBMR400. Since contact detection accounts for more than 60% of the overall CPU time in pebble flow simulation, accelerating the contact detection can greatly enhance overall efficiency. In the present work, based on the design features of PBRs, two contact detection algorithms, the basic cell search algorithm and the bounding box search algorithm, are investigated and applied to pebble contact detection. The influence of PBR system size, core geometry, and search cell size on contact detection efficiency is presented. Our results suggest that for present PBR applications the bounding box algorithm is less sensitive to these effects and outperforms the basic cell search algorithm in pebble contact detection. (authors)
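The basic cell search named above is the classic broad-phase strategy: bin pebble centers into a uniform grid with cells one diameter wide, so only pebbles in the same or neighbouring cells need a pairwise distance test. A minimal sketch for equal-radius pebbles (a simplified illustration, not the paper's code) follows:

```python
import itertools
from collections import defaultdict

def find_contacts(centers, radius, cell_size=None):
    """Broad-phase contact detection for equal-radius spheres using a
    uniform cell search: O(N) binning, then distance tests restricted
    to each cell and its 26 neighbours."""
    if cell_size is None:
        cell_size = 2.0 * radius          # one pebble diameter per cell
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(centers):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[key].append(idx)
    contacts = set()
    for (cx, cy, cz), members in grid.items():
        # candidates from this cell and its neighbours
        cand = []
        for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
            cand.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
        for i in members:
            xi, yi, zi = centers[i]
            for j in cand:
                if j <= i:                # each pair tested once
                    continue
                xj, yj, zj = centers[j]
                d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
                if d2 < (2.0 * radius) ** 2:
                    contacts.add((i, j))
    return contacts
```

A bounding-box (sweep-and-prune) variant instead sorts axis-aligned extents along each axis and keeps pairs whose intervals overlap on all three axes; the abstract reports that approach to be the more robust of the two for PBR geometries.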
Mixed-field GCR Simulations for Radiobiological Research Using Ground Based Accelerators
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis A.
2014-01-01
Space radiation comprises a large number of particle types and energies with differing ionization power, from high-energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground-based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and for dosimetry, electronics parts, and shielding testing, using mono-energetic beams of single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements a rapid switching mode and higher-energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create the GCR Z-spectrum in major energy bins. After considering the GCR environment and the energy limitations of NSRL, a GCR reference field is proposed, based on extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding to within 20% of simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to relate chronic GCR exposures of up to 3 years to simulations with cell and animal models of human risks. We discuss possible approaches to mapping important biological time scales in experimental models using ground-based simulation with extended exposures of up to a few weeks and fractionation approaches at a GCR simulator.
High energy gain in three-dimensional simulations of light sail acceleration
Sgattoni, A.; Sinigardi, S.; Macchi, A.
2014-08-25
The dynamics of radiation pressure acceleration in the relativistic light sail regime are analysed by means of large-scale, three-dimensional (3D) particle-in-cell simulations. Unlike in other mechanisms, the 3D dynamics leads to faster and higher energy gain than in 1D or 2D geometry. This effect is caused by the local decrease of the target density due to transverse expansion, leading to a "lighter sail." However, the rarefaction of the target leads to an earlier transition to transparency, limiting the energy gain. A transverse instability leads to a structured and inhomogeneous ion distribution.
Halavanau, A.; Piot, P.
2015-12-01
Cascaded Longitudinal Space Charge Amplifiers (LSCAs) have been proposed as a mechanism to generate density modulation over a broad spectral range. The scheme was recently demonstrated in the optical regime and confirmed the production of broadband optical radiation. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at the Fermilab Accelerator Science & Technology (FAST) facility for producing broadband ultraviolet radiation. Our studies are carried out using elegant with an included tree-based, grid-less space-charge algorithm.
Simulation study of accelerator based quasi-mono-energetic epithermal neutron beams for BNCT.
Adib, M; Habib, N; Bashter, I I; El-Mesiry, M S; Mansy, M S
2016-01-01
Filtered neutron techniques were applied to produce quasi-mono-energetic neutron beams in the energy range 1.5-7.5 keV at the accelerator port, using the neutron spectrum generated by the Li(p,n)Be reaction. A simulation study was performed to characterize the filter components and the transmitted beam lines. The features of the filtered beams are detailed in terms of the optimal thicknesses of the primary and additive components. A computer code named "QMNB-AS" was developed to carry out the required calculations. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal neutrons, fast neutrons, and γ-rays.
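The QMNB-AS code is not described further in the abstract, but the core of any filtered-beam calculation is the exponential attenuation law T(E) = exp(-Σᵢ Nᵢ σᵢ(E) tᵢ) over the filter layers. A minimal sketch, with hypothetical cross-section inputs standing in for evaluated nuclear data:

```python
import math

def filter_transmission(energies_eV, components):
    """Transmission of a layered neutron filter by exponential attenuation.
    `components` is a list of (number_density_per_cm3, thickness_cm,
    sigma_fn) tuples, where sigma_fn(E) returns the total microscopic
    cross section in cm^2 at energy E (hypothetical stand-ins here;
    real work would use evaluated cross-section libraries)."""
    trans = []
    for E in energies_eV:
        tau = sum(N * sigma(E) * t for N, t, sigma in components)  # optical depth
        trans.append(math.exp(-tau))
    return trans
```

A quasi-mono-energetic window appears wherever one component has a deep cross-section interference minimum, so that τ(E) is small only in that window.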
Neutron activation processes simulation in an Elekta medical linear accelerator head.
Juste, B; Miró, R; Verdú, G; Díez, S; Campayo, J M
2014-01-01
Monte Carlo estimates of the giant-dipole-resonance (GDR) photoneutrons inside the Elekta Precise LINAC head (emitting a 15 MV photon beam) were performed using MCNP6 (the general-purpose Monte Carlo N-Particle code, version 6). Each component of the LINAC head geometry and its materials was modelled in detail using the manufacturer's information. Primary photons generate photoneutrons, and their transport across the treatment head was simulated, including the (n, γ) reactions that produce activation products. MCNP6 was used to develop a method for quantifying the activation of accelerator components. The approach described in this paper is useful for quantifying the origin and the amount of nuclear activation.
Simulations of flame acceleration and deflagration-to-detonation transitions in methane-air systems
Kessler, D.A.; Gamezo, V.N.; Oran, E.S.
2010-11-15
Flame acceleration and deflagration-to-detonation transitions (DDT) in large obstructed channels filled with a stoichiometric methane-air mixture are simulated using a single-step reaction mechanism. The reaction parameters are calibrated using known velocities and length scales of laminar flames and detonations. Calculations of the flame dynamics and DDT in channels with obstacles are compared to previously reported experimental data. The results obtained using the simple reaction model qualitatively, and in many cases, quantitatively match the experiments and are found to be largely insensitive to small variations in model parameters. (author)
Simulations of a High-Transformer-Ratio Plasma Wakefield Accelerator Using Multiple Electron Bunches
Kallos, Efthymios; Muggli, Patric; Katsouleas, Thomas; Yakimenko, Vitaly; Park, Jangho
2009-01-22
Particle-in-cell simulations of a plasma wakefield accelerator in the linear regime are presented, consisting of four electron bunches that are fed into a high-density plasma. It is found that a high transformer ratio can be maintained over 43 cm of plasma if the charge in each bunch is increased linearly, the bunches are placed 1.5 plasma wavelengths apart, and the bunch emittances are adjusted to compensate for the nonlinear focusing forces. The generated wakefield is sampled by a test witness bunch whose energy gain after the plasma is six times the energy loss of the drive bunches.
Adaptive accelerated ReaxFF reactive dynamics with validation from simulating hydrogen combustion.
Cheng, Tao; Jaramillo-Botero, Andrés; Goddard, William A; Sun, Huai
2014-07-02
We develop here a methodology for dramatically accelerating ReaxFF reactive force field based reactive molecular dynamics (RMD) simulations through use of the bond boost (BB) concept, which we validate here for hydrogen combustion. The bond order, undercoordination, and overcoordination concepts of ReaxFF ensure that the BB correctly adapts to the instantaneous configurations in the reactive system to automatically identify the reactions appropriate to receive the bond boost. We refer to this as adaptive Accelerated ReaxFF Reactive Dynamics, or aARRDyn. To validate the aARRDyn methodology, we determined the detailed sequence of reactions for hydrogen combustion with and without the BB. We validate that the kinetics and reaction mechanisms (that is, the detailed sequences of reactive intermediates and their subsequent transformations) for H2 oxidation obtained from aARRDyn agree well with brute-force reactive molecular dynamics (BF-RMD) at 2498 K. Using aARRDyn, we then extend our simulations to the whole range of combustion temperatures from ignition (798 K) to flame temperature (2998 K), and demonstrate that, over this full temperature range, the reaction rates predicted by aARRDyn agree well with the BF-RMD values extrapolated to lower temperatures. For the aARRDyn simulation at 798 K we find that the time period for half the H2 to form H2O product is ~538 s, whereas the computational cost was just 1289 ps, a speed increase of ~0.42 trillion (10^12) over BF-RMD. In carrying out these RMD simulations we found that the ReaxFF-COH2008 version of the ReaxFF force field was not accurate for such intermediates as H3O. Consequently we reoptimized the fit against quantum mechanics (QM) data, leading to the ReaxFF-OH2014 force field that was used in the simulations.
Sartori, E; Brescaccin, L; Serianni, G
2016-02-01
Particle-wall interactions determine in different ways the operating conditions of plasma sources, ion accelerators, and beams operating in vacuum. For instance, a contribution to gas heating is given by ion neutralization at the walls; beam losses and stray particle production, detrimental for high-current negative ion systems such as beam sources for fusion, are caused by collisional processes with the residual gas, whose density profile is determined by the scattering of neutral particles at the walls. This paper shows that Molecular Dynamics (MD) studies at the nano-scale can provide accommodation parameters for gas-wall interactions, such as the momentum accommodation coefficient and the energy accommodation coefficient: in non-isothermal flows (such as the neutral gas in the accelerator, coming from the plasma source), these affect the gas density gradients and influence the efficiency and losses of negative ion accelerators in particular. For ideal surfaces, the computation also provides the angular distribution of scattered particles. The classical MD method has been applied to the case of diatomic hydrogen molecules. Single collision events, against a frozen wall or a fully thermal lattice, have been simulated using probe molecules. Different modelling approximations are compared.
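Once the MD probe-molecule statistics are averaged, the coefficients follow from their standard definitions (a sketch of the textbook estimators, not the paper's specific procedure): the energy accommodation coefficient compares the incident-to-reflected energy change with full thermalization at the wall, and the tangential momentum accommodation coefficient measures the fraction of tangential momentum lost.

```python
def accommodation_coefficients(E_in, E_out, E_wall, pt_in, pt_out):
    """Energy and tangential-momentum accommodation coefficients from
    ensemble-averaged incident/reflected quantities of MD probe molecules.
    E_wall is the mean energy of molecules fully thermalized at the wall
    temperature.  alpha = 1 means full thermalization; alpha = 0 means
    specular (elastic) reflection."""
    alpha_E = (E_in - E_out) / (E_in - E_wall)   # energy accommodation
    sigma_t = (pt_in - pt_out) / pt_in           # tangential momentum accommodation
    return alpha_E, sigma_t
```

In gas-flow models these coefficients then parameterize the wall boundary condition, which is how the MD results feed the density-profile calculations mentioned above.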
NASA Astrophysics Data System (ADS)
Cruz, F.; Fonseca, R. A.; Silva, L. O.; Rigby, A.; Gregori, G.; Bamford, R. A.; Bingham, R.; Koenig, M.
2016-10-01
Efficient particle acceleration in astrophysical shocks can only be achieved in the presence of initial high energy particles. A candidate mechanism to provide an initial seed of energetic particles is lower hybrid turbulence (LHT). This type of turbulence is commonly excited in regions where space and astrophysical plasmas interact with large obstacles. Due to the nature of LH waves, energy can be resonantly transferred from ions (travelling perpendicular to the magnetic field) to electrons (travelling parallel to it) and the consequent motion of the latter in turbulent shock electromagnetic fields is believed to be responsible for the observed x-ray fluxes from non-thermal electrons produced in astrophysical shocks. Here we present PIC simulations of plasma flows colliding with magnetized obstacles showing the formation of a bow shock and the consequent development of LHT. The plasma and obstacle parameters are chosen in order to reproduce the results obtained in a recent experiment conducted at the LULI laser facility at Ecole Polytechnique (France) to study accelerated electrons via LHT. The wave and particle spectra are studied and used to produce synthetic diagnostics that show good qualitative agreement with experimental results. Work supported by the European Research Council (Accelerates ERC-2010-AdG 267841).
Wang, Xuesong; Wang, Ting; Tarko, Andrew; Tremont, Paul J
2015-03-01
Combined horizontal and vertical alignments are frequently used on mountainous freeways in China; however, design guidelines that consider the safety impact of combined alignments are not currently available. Past field studies have provided some data on the relationship between road alignment and safety, but the effects of differing combined alignments on either lateral acceleration or safety have not been systematically examined. The primary reason for this void is that most prior studies used observational methods that did not permit control of the key variables. A controlled parametric study is needed that examines lateral acceleration as drivers adjust their speeds across a range of combined horizontal and vertical alignments. Such a study was conducted in Tongji University's eight-degree-of-freedom driving simulator by replicating the full range of combined alignments used on a mountainous freeway in China. Multiple linear regression models were developed to estimate the effects of the combined alignments on lateral acceleration. Based on these models, design domains were calculated to illustrate the results and to assist engineers in designing safer mountainous freeways.
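A multiple linear regression of this kind reduces to an ordinary least-squares fit of lateral acceleration on the alignment variables. The sketch below uses hypothetical predictors (curvature, grade, speed) standing in for the paper's actual variables:

```python
import numpy as np

def fit_lateral_accel(curvature, grade, speed, a_lat):
    """Ordinary least-squares fit of lateral acceleration against
    horizontal curvature, vertical grade, and speed (hypothetical
    predictor set; the paper's variable list is not given in full).
    Returns [intercept, b_curvature, b_grade, b_speed]."""
    X = np.column_stack([np.ones_like(curvature), curvature, grade, speed])
    coef, *_ = np.linalg.lstsq(X, a_lat, rcond=None)
    return coef
```

The fitted coefficients can then be inverted to bound the alignment combinations that keep predicted lateral acceleration below a comfort or safety threshold, which is the "design domain" idea described above.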
R-leaping: accelerating the stochastic simulation algorithm by reaction leaps.
Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros
2006-08-28
A novel algorithm is proposed for accelerating the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are sampled from correlated binomial distributions, and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems, and the results are discussed in comparison with established methods.
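The essential step is to partition a preset total of L firings among the channels in proportion to their propensities, then advance time by a Gamma-distributed increment. The sketch below uses the standard sequential conditional-binomial decomposition of the multinomial (the paper's procedure additionally removes any dependence on channel ordering); propensities here are hypothetical:

```python
import numpy as np

def r_leap_step(a, L, rng):
    """One R-leap: distribute L total reaction firings across channels
    with propensities a via conditional binomial sampling, and draw the
    elapsed time from a Gamma(L, 1/a0) distribution."""
    a = np.asarray(a, dtype=float)
    a0 = a.sum()
    K = np.zeros(len(a), dtype=int)
    remaining, rem_prop = L, a0
    for j in range(len(a) - 1):
        if remaining == 0:
            break
        p = min(1.0, a[j] / rem_prop)       # conditional firing probability
        K[j] = rng.binomial(remaining, p)
        remaining -= K[j]
        rem_prop -= a[j]
    K[-1] = remaining                       # last channel takes the remainder
    tau = rng.gamma(shape=L, scale=1.0 / a0)
    return K, tau
```

After each leap, species counts are updated by the stoichiometry applied K[j] times per channel; the accuracy and negative-species controls mentioned in the abstract constrain how large L may be.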
Magnetic field simulation of wiggler on LUCX accelerator facility using Radia
NASA Astrophysics Data System (ADS)
Sutygina, Y. N.; Harisova, A. E.; Shkitov, D. A.
2016-11-01
A flat wiggler consisting of NdFeB permanent magnets was installed on the compact linear electron accelerator LUCX (KEK) in Japan. Following the installation, experiments on the generation of undulator radiation (UR) in the terahertz wavelength range are planned. To perform detailed calculations and optimization of the UR characteristics, it is necessary to know the parameters of the magnetic field generated in the wiggler. In this paper, extended simulation results for the wiggler magnetic field over the entire volume between the poles are presented. The magnetic field obtained in the Radia simulation is compared with the field calculated by another code based on the finite element method.
Simulating Electron Effects in Heavy-Ion Accelerators with Solenoid Focusing
Sharp, W. M.; Grote, D. P.; Cohen, R. H.; Friedman, A.; Molvik, A. W.; Vay, J.-L.; Seidl, P. A.; Roy, P. K.; Coleman, J. E.; Haber, I.
2007-06-20
Contamination from electrons is a concern for solenoid-focused ion accelerators being developed for experiments in high-energy-density physics. These electrons, produced directly by beam ions hitting lattice elements or indirectly by ionization of desorbed neutral gas, can potentially alter the beam dynamics, leading to a time-varying focal spot, increased emittance, halo, and possibly electron-ion instabilities. The electrostatic particle-in-cell code WARP is used to simulate electron-cloud behavior on the Solenoid Transport Experiment (STX) at Lawrence Berkeley National Laboratory. We present self-consistent simulations of several STX configurations and compare the results with experimental data in order to calibrate the physics parameters in the model.
Simulating Electron Clouds in High-Current Ion Accelerators with Solenoid Focusing
Sharp, W; Grote, D; Cohen, R; Friedman, A; Vay, J; Seidl, P; Roy, P; Coleman, J; Armijo, J; Haber, I
2006-08-15
Contamination from electrons is a concern for the solenoid-focused ion accelerators being developed for experiments in high-energy-density physics (HEDP). These electrons are produced directly by beam ions hitting lattice elements and intercepting diagnostics, or indirectly by ionization of desorbed neutral gas, and they are believed responsible for the time dependence of the beam radius, emittance, and focal distance seen on the Solenoid Transport Experiment (STX) at Lawrence Berkeley National Laboratory. The electrostatic particle-in-cell code WARP has been upgraded to include the physics needed to simulate electron-cloud phenomena. We present preliminary self-consistent simulations of STX experiments suggesting that the observed time dependence of the beam stems from a complicated interaction of beam ions, desorbed neutrals, and electrons.
Jin, Q. Y.; Li, Zh. M.; Liu, W.; Zhao, H. Y.; Zhang, J. J.; Sha, Sh.; Zhang, Zh. L.; Zhang, X. Zh.; Sun, L. T.; Zhao, H. W.
2014-07-15
The direct plasma injection scheme (DPIS) has been under study at the Institute of Modern Physics for several years. A C6+ beam with a peak current of 13 mA and an energy of 593 keV/u has been successfully achieved after acceleration with the DPIS method. To understand the DPIS process, the following simulations were performed. First, using the total current intensity and the relative yields of the different carbon charge states measured at different distances from the target, the absolute current intensities and time dependences of the different charge states are scaled back to the exit of the laser ion source. Then, with these derived values as input parameters, the extraction of the carbon beam from the laser ion source into the radio frequency quadrupole with DPIS is simulated; the simulation agrees well with the experimental results.
Yoganandan, Narayan; Pintar, Frank A; Schlick, Michael; Humm, John R; Voo, Liming; Merkle, Andrew; Kleinberger, Michael
2015-09-18
The objective of this study was to develop a simple device, the Vertical accelerator (Vertac), to apply vertical impact loads to Post Mortem Human Subject (PMHS) or dummy surrogates, because injuries sustained in military conflicts are associated with this load vector, for example, in under-body blasts from explosive devices. The two-part, mechanically controlled device consisted of load-application and load-receiving sections connected by a lever arm. The former section incorporated a falling weight that impacts one end of the lever arm, inducing a reaction at the other, load-receiving end. The "launch-plate" on this end of the arm applied the vertical impact load/acceleration pulse under different initial conditions to biological/physical surrogates attached to the second section. Different acceleration pulses can be induced by using various energy-absorbing materials and by controlling drop height and weight. The second section of Vertac had the flexibility to accommodate different body regions for vertical loading experiments. The device is simple and inexpensive. It has the ability to control pulses and the flexibility to accommodate different sub-systems/components of human surrogates. It can incorporate preloads and military personal protective equipment (e.g., a combat helmet), and it can simulate vehicle roofs. The device allows for intermittent specimen evaluations (x-ray and palpation, without changing specimen alignment). The two free but interconnected sections can be used to advance the safety of military personnel. Examples demonstrating the feasibility of the Vertac device for applying vertical impact accelerations, using helmeted PMHS head-neck preparations and booted Hybrid III dummy lower-leg preparations under in-contact and launch-type impact experiments, are presented.
Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code.
Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza
2013-07-01
The Monte Carlo method is the most accurate method for simulating radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator for 6 MV and 18 MV beams was performed. The simulation results were validated by measurements in water with an ionization chamber and by extended dose range (EDR2) film in solid water. The linac's X-ray output is highly sensitive to the properties of the primary electron beam. A square field size of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. Head simulation was performed with BEAMnrc and dose calculation with DOSXYZnrc; the 3ddose files produced by DOSXYZnrc were analyzed using a homemade MATLAB program. At 6 MV, agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, agreement was within 1% except in the build-up region, where the difference was 2% (compared with 1% at 6 MV). The mean difference between measurements and Monte Carlo simulation is very small for both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, as in the intensity-modulated radiation therapy technique.
Simulations of radiation pressure ion acceleration with the VEGA Petawatt laser
NASA Astrophysics Data System (ADS)
Stockhausen, Luca C.; Torres, Ricardo; Conejero Jarque, Enrique
2016-09-01
The Spanish Pulsed Laser Centre (CLPU) is a new high-power laser user facility. Its main system, VEGA, is a CPA Ti:Sapphire laser which, in its final phase, will reach petawatt peak powers in pulses of 30 fs with a pulse contrast of 1:10^10 at 1 ps. The extremely low pre-pulse intensity makes this system ideally suited for studying laser interaction with ultrathin targets. We have used the particle-in-cell (PIC) code OSIRIS to carry out 2D simulations of the acceleration of ions from ultrathin solid targets under the unique conditions provided by VEGA, with laser intensities up to 10^22 W cm^-2 impinging normally on 20-60 nm thick overdense plasmas, with different polarizations and pre-plasma scale lengths. We show how signatures of the radiation-pressure-dominated regime, such as layer compression and bunch formation, are present only with circular polarization. By passively shaping the density gradient of the plasma, we demonstrate an enhancement in peak energy up to tens of MeV and monoenergetic features. By contrast, linear polarization at the same intensity level causes the target to blow up, resulting in much lower energies and broader spectra. One limiting factor of Radiation Pressure Acceleration is the development of Rayleigh-Taylor-like instabilities at the interface of the plasma and the photon fluid. These result in the formation of bubbles in the spatial profile of laser-accelerated proton beams, structures previously evidenced both experimentally and theoretically. We have performed 2D simulations to characterize this bubble-like structure and report its dependence on laser and target parameters.
MCNP Neutron Simulations: The Effectiveness of the University of Kentucky Accelerator Laboratory Pit
NASA Astrophysics Data System (ADS)
Jackson, Daniel; Nguyen, Thien An; Hicks, S. F.; Rice, Ben; Vanhoy, J. R.
2015-10-01
The design of the Van de Graaff Particle Accelerator complex at the University of Kentucky is marked by the unique addition of a pit in the main neutron scattering room underneath the neutron source and detection shielding assembly. This pit was constructed as a neutron trap to decrease the neutron flux within the laboratory. Reducing this background flux minimizes noise in the detection of neutrons scattered from the samples under study. This project uses the Monte Carlo N-Particle transport code (MCNP) to model the structure of the accelerator complex, the gas cell, and the detector's collimator and shielding apparatus in order to calculate the neutron flux in various sections of the laboratory. Simulations were completed with baseline runs of 10^7 neutrons at energies of 4 MeV and 17 MeV, produced respectively by the ^3H(p,n)^3He and ^3H(d,n)^4He source reactions. In addition, a comparison model of the complex with simply a floor and no pit was designed, and the respective neutron fluxes of both models were calculated and compared. The results of the simulations support the validity of the pit design in significantly reducing the overall neutron flux throughout the accelerator complex, which could inform future designs aiming to increase the precision and reliability of data. This project was supported in part by the DOE NEUP Grant NU-12-KY-UK-0201-05 and the Donald A. Cowan Physics Institute at the University of Dallas.
Simulations of ion acceleration from ultrathin targets with the VEGA petawatt laser
NASA Astrophysics Data System (ADS)
Stockhausen, Luca C.; Torres, Ricardo; Conejero Jarque, Enrique
2015-05-01
The Spanish Pulsed Laser Centre (CLPU) is a new high-power laser facility for users. Its main system, VEGA, is a CPA Ti:Sapphire laser which, in its final phase, will be able to reach petawatt peak powers in pulses of 30 fs with a pulse contrast of 1:10^10 at 1 ps. The extremely low level of pre-pulse intensity makes this system ideally suited for studying laser interaction with ultrathin targets. We have used the particle-in-cell (PIC) code OSIRIS to carry out 2D simulations of the acceleration of ions from ultrathin solid targets under the unique conditions provided by VEGA, with laser intensities up to 10^22 W cm^-2 impinging normally on 5-40 nm thick overdense plasmas, with different polarizations and pre-plasma scale lengths. We show how signatures of the radiation-pressure-dominated regime, such as layer compression and bunch formation, are present only with circular polarization. By passively shaping the density gradient of the plasma, we demonstrate an enhancement in peak energy up to tens of MeV and monoenergetic features. In contrast, linear polarization at the same intensity level causes the target to blow up, resulting in much lower energies and broader spectra. One limiting factor of radiation pressure acceleration is the development of Rayleigh-Taylor-like instabilities at the interface of the plasma and photon fluid. These instabilities result in the formation of bubbles in the spatial profile of laser-accelerated proton beams, structures that have previously been evidenced both experimentally and theoretically. We have performed 2D simulations to characterize this bubble-like structure and report on its dependence on laser and target parameters.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the graphics processing unit (GPU) to accelerate the rescaling of single Monte Carlo runs in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
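The rescaling idea can be sketched under the common "scaled Monte Carlo" assumption: photon path lengths from one stored absorption-free run are reweighted via Beer-Lambert attenuation for each new absorption coefficient, so a whole sweep of optical properties reuses a single simulation. The path-length distribution below is synthetic, not output of the authors' package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a baseline (mu_a = 0) Monte Carlo run: total path lengths [cm]
# travelled by detected photons.  In the real method these come from the
# stored output of a single simulation.
path_lengths = rng.exponential(scale=1.0, size=100_000)

def rescale_reflectance(path_lengths, mu_a):
    """Rescale one absorption-free run to a new absorption coefficient
    mu_a [1/cm] by Beer-Lambert weighting of each photon's path length."""
    return float(np.mean(np.exp(-mu_a * path_lengths)))

# Reflectance for a sweep of absorption coefficients from one stored run
r = [rescale_reflectance(path_lengths, mu_a) for mu_a in (0.0, 0.1, 1.0)]
```

Each reflectance value is a simple reduction over the stored paths, which is why the computation maps so well onto a GPU.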
Saidane, K; Polizu, S; Yahia, L'h
2007-01-01
In this study, we provide an experimental evaluation of the fatigue behavior of a nitinol (NiTi) endovascular device (peripheral stent). The accelerated fatigue tests were performed under arterial conditions that mimicked actual physiological conditions. Natural rubber latex tubing was used to simulate human arteries. The equipment design and the test parameters used allowed for the simulation of a compliant artery and the application of circumferential forces to the device. The stent compliance values were good indicators for tracking the time evolution of fatigue behavior. Moreover, analyses of changes in the surface morphology and chemical composition were used to establish a relationship between surface characteristics and peripheral stent response during 400 million cycles, which is equivalent to 10 years of human life. In order to determine the influence of accelerated fatigue, an evaluation of both mechanical and surface characteristics was carried out before and after testing using the following methods: radial hoop testing (RH), scanning electron microscopy (SEM), Auger electron spectroscopy (AES), atomic absorption spectroscopy (AAS), and X-ray photoelectron spectroscopy (XPS). Under these experimental conditions, the studies showed that after 400 million cycles the tested stents did not demonstrate any mechanical failure, and the surface did not undergo any changes in its chemical composition. However, we did observe an increase in roughness and signs of pitting corrosion.
Warp simulations for capture and control of laser-accelerated proton beams
Nurnberg, F; Friedman, A; Grote, D P; Harres, K; Logan, B G; Schollmeier, M; Roth, M
2009-10-22
The capture of laser-accelerated proton beams accompanied by co-moving electrons via a solenoid field has been studied with particle-in-cell simulations. The main advantages of the Warp simulation suite that was used, relative to envelope or tracking codes, are the possibility of including all source parameters energy-resolved, adding electrons as a second species, and considering the non-negligible space-charge forces and electrostatic self-fields. It was observed that the influence of the electrons is of vital importance: the magnetic effect on the electrons outweighs the space-charge force, so the electrons are forced onto the beam axis and attract protons. Besides the energy-dependent proton density increase on axis, the change in the particle spectrum is also important for future applications. Protons are only slightly accelerated or decelerated, whereas electrons are strongly affected. Two-thirds of all electrons are lost directly at the source, and 27% of all protons hit the inner wall of the solenoid.
Lindert, Steffen; Bucher, Denis; Eastman, Peter; Pande, Vijay; McCammon, J Andrew
2013-11-12
The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long-time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be useful for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling.
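The boost potential underlying aMD has a simple closed form: when the potential energy V falls below a threshold E, the bias dV = (E - V)^2 / (alpha + E - V) is added, raising basins and lowering barriers while preserving the ordering of states. The threshold E and smoothing parameter alpha below are illustrative values, not a recommended parametrization:

```python
def amd_boost(V, E, alpha):
    """Boost energy added in accelerated MD: dV = (E - V)^2 / (alpha + E - V)
    whenever V < E, and zero otherwise."""
    if V >= E:
        return 0.0
    return (E - V) ** 2 / (alpha + E - V)

def amd_potential(V, E, alpha):
    """Modified potential V* = V + dV that is actually sampled in an aMD run.
    For alpha > 0 the boosted potential stays below E and remains monotonic
    in V, so minima keep their relative ordering."""
    return V + amd_boost(V, E, alpha)
```

Reweighting the sampled configurations by exp(dV / kT) then recovers canonical averages, which is the step that lets aMD reach long-time-scale events.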
Multi-core/GPU accelerated multi-resolution simulations of compressible flows
NASA Astrophysics Data System (ADS)
Hejazialhosseini, Babak; Rossinelli, Diego; Koumoutsakos, Petros
2010-11-01
We develop a multi-resolution solver for single- and multi-phase compressible flow simulations by coupling average-interpolating wavelets and local time stepping schemes with high-order finite volume schemes. Wavelets allow for high compression rates and explicit control over the error in the adaptive representation of the flow field, but their efficient parallel implementation is hindered by the use of traditional data-parallel models. In this work we demonstrate that this methodology can be implemented so that it benefits from the processing power of emerging hybrid multicore and multi-GPU architectures. This is achieved by exploiting a task-based parallelism paradigm and the concept of wavelet blocks combined with OpenCL and Intel Threading Building Blocks. The solver is able to handle large resolution jumps and benefits from adaptive time integration using local time stepping schemes as implemented on heterogeneous multi-core/GPU architectures. We demonstrate the accuracy of our method and the performance of our solver on different architectures for 2D simulations of shock-bubble interaction and Richtmyer-Meshkov instability.
Acceleration of a Particle-in-Cell Code for Space Plasma Simulations with OpenACC
NASA Astrophysics Data System (ADS)
Peng, Ivy Bo; Markidis, Stefano; Vaivads, Andris; Vencels, Juris; Deca, Jan; Lapenta, Giovanni; Hart, Alistair; Laure, Erwin
2015-04-01
We simulate space plasmas with the particle-in-cell (PIC) method, which uses computational particles to mimic electrons and protons in the solar wind and in Earth's magnetosphere. The magnetic and electric fields are computed by solving Maxwell's equations on a computational grid. Each PIC simulation step has four major phases: interpolation of fields to particles, updating the location and velocity of each particle, interpolation of particles to the grid, and solving Maxwell's equations on the grid. We use the iPIC3D code, implemented in C++ using both MPI and OpenMP, for our case study. As of November 2014, heterogeneous systems using hardware accelerators such as graphics processing units (GPUs) and Many Integrated Core (MIC) coprocessors continue to grow in number among the Top500 most powerful supercomputers worldwide. Scientific applications for numerical simulations need to adopt accelerators to achieve portability and scalability on the coming exascale systems. In this work, we conduct a case study of using OpenACC to offload the computation-intensive parts of iPIC3D, the particle mover and the interpolation of particles to the grid, to multi-GPU systems. We use MPI for inter-node communication, halo exchange, and communicating particles. We identify the parts most suitable for GPU acceleration by profiling with the Cray Performance Analysis Tool (CrayPAT). We implement a manual deep copy to address the challenges of porting C++ classes to the GPU, and we document the changes necessary to adapt the existing algorithms for GPU computation. We present the challenges and findings, as well as our methodology, for porting a particle-in-cell code for space applications to multi-GPU systems using OpenACC.
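The "particle mover" phase offloaded here can be illustrated with the classic explicit Boris push used in many PIC codes. This is a generic sketch only: iPIC3D itself uses an implicit-moment mover, and the fields and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One step of the classic explicit Boris mover: half electric kick,
    magnetic rotation, half electric kick, then a position update."""
    v_minus = v + 0.5 * q_m * dt * E          # first half electric kick
    t = 0.5 * q_m * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * dt * E       # second half electric kick
    return x + dt * v_new, v_new

# Gyration test: uniform B along z and no E, so the speed is conserved
x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.1)
```

The per-particle independence of this loop is exactly what makes the mover a good candidate for OpenACC offloading.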
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
GPU accelerated simulations of bluff body flows using vortex particle methods
NASA Astrophysics Data System (ADS)
Rossinelli, Diego; Bergdorf, Michael; Cottet, Georges-Henri; Koumoutsakos, Petros
2010-05-01
We present a GPU-accelerated solver for simulations of bluff body flows in 2D using a remeshed vortex particle method and the vorticity formulation of the Brinkman penalization technique to enforce boundary conditions. The efficiency of the method relies on fast and accurate particle-grid interpolations on GPUs for the remeshing of the particles and the computation of the field operators. The GPU implementation uses OpenGL to perform efficient particle-grid operations and a CUFFT-based solver for the Poisson equation with unbounded boundary conditions. The accuracy and performance of the GPU simulations and their relative advantages and drawbacks over CPU-based computations are reported for simulations of flows past an impulsively started circular cylinder at Reynolds numbers between 40 and 9500. The results indicate up to two orders of magnitude speedup of the GPU implementation over the respective CPU implementations. The accuracy of the GPU computations depends on the Reynolds number of the flow. For Re up to 1000 there is little difference between GPU and CPU calculations, but this agreement deteriorates (albeit remaining within 5% in drag calculations) for higher Re numbers as the single precision of the GPU adversely affects the accuracy of the simulations.
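The remeshing step interpolates particle strengths back onto a regular grid with a moment-conserving kernel. The abstract does not name the kernel; the M4' kernel below is the one commonly used in remeshed vortex methods and is shown here only as a plausible sketch:

```python
def m4prime(x):
    """M4' remeshing kernel: a third-order interpolant with two-cell
    support that conserves the first three moments of the particle field."""
    x = abs(x)
    if x < 1.0:
        return 1.0 - 2.5 * x**2 + 1.5 * x**3
    if x < 2.0:
        return 0.5 * (2.0 - x) ** 2 * (1.0 - x)
    return 0.0

# A particle at fractional grid position x distributes its strength over
# the four nearest grid nodes with weights m4prime(x - i).
weights = {i: m4prime(0.3 - i) for i in range(-1, 3)}
```

Because each particle writes to only four nodes per direction, this scatter is the operation the paper maps onto the GPU.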
Rider, William; Kamm, J. R.; Zoldi, C. A.; Tomkins, C. D.
2002-01-01
We present detailed spatial analysis comparing experimental data and numerical simulation results for the Richtmyer-Meshkov instability experiments of Prestridge et al. and Tomkins et al. These experiments consist, respectively, of one and two diffuse cylinders of sulphur hexafluoride (SF6) impulsively accelerated by a Mach 1.2 shock wave in air. The subsequent fluid evolution and mixing is driven by the deposition of baroclinic vorticity at the interface between the two fluids. Numerical simulations of these experiments are performed with three different versions of high-resolution finite volume Godunov methods, including a new weighted adaptive Runge-Kutta (WARK) scheme. We quantify the nature of the mixing using integral measures as well as fractal analysis and continuous wavelet transforms. Our investigation of the gas cylinder configurations follows the path of our earlier studies of the geometrically and dynamically more complex gas 'curtain' experiment. In those studies, we found significant discrepancies between the details of the experimentally measured mixing and those of the numerical simulations. Here we evaluate the effects of these hydrodynamic integration techniques on the diffuse gas cylinder simulations, which we quantitatively compare with experimental data.
3D simulations of young core-collapse supernova remnants undergoing efficient particle acceleration
NASA Astrophysics Data System (ADS)
Ferrand, Gilles; Safi-Harb, Samar
2016-06-01
Within our Galaxy, supernova remnants are believed to be the major sources of cosmic rays up to the 'knee'. However, important questions remain regarding the relative shares of the hadronic and leptonic components, and the fraction of the supernova energy channelled into these components. We address these questions by means of numerical simulations that combine a hydrodynamic treatment of the shock wave with a kinetic treatment of particle acceleration. Performing 3D simulations allows us to produce synthetic projected maps and spectra of the thermal and non-thermal emission that can be compared with multi-wavelength observations (in radio, X-rays, and γ-rays). Supernovae come in different types, and although their energy budgets are of the same order, their remnants have different properties, and so may contribute in different ways to the pool of Galactic cosmic rays. Our first simulations focused on thermonuclear supernovae, like Tycho's SNR, which usually occur in a mostly undisturbed medium. Here we present our 3D simulations of core-collapse supernovae, like the Cas A SNR, which occur in a more complex medium bearing the imprint of the wind of the progenitor star.
Fourier analysis of Solar atmospheric numerical simulations accelerated with GPUs (CUDA).
NASA Astrophysics Data System (ADS)
Marur, A.
2015-12-01
Solar dynamics in the convection zone creates a variety of waves that may propagate through the solar atmosphere. These waves are important in facilitating the energy transfer between the Sun's surface and the corona, as well as in propagating energy throughout the solar system. How and where these waves are dissipated remains an open question. Advanced 3D numerical simulations have furthered our understanding of the processes involved. Fourier transforms are used to understand the nature of the waves by finding their frequencies and wavelengths through the simulated atmosphere, as well as the nature of their propagation and where they are dissipated. In order to analyze the different waves produced by the aforementioned simulations and models, Fast Fourier Transform algorithms will be applied. Since processing the multitude of different layers of the simulations (on the order of several 100^3 grid points) would be time-intensive and inefficient on a CPU, CUDA, a computing architecture that harnesses the power of the GPU, will be used to accelerate the calculations.
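The Fourier analysis step can be sketched with NumPy before moving to CUDA (a cuFFT implementation follows the same pattern). The sampled signal below is a hypothetical 5 mHz (roughly 3-minute) oscillation at one grid point, not actual simulation output:

```python
import numpy as np

# Hypothetical time series at one grid point of the simulated atmosphere:
# a 5 mHz oscillation sampled every 10 s.
dt = 10.0                                  # sampling interval [s]
t = np.arange(0, 4000, dt)                 # 400 samples
signal = np.sin(2 * np.pi * 5e-3 * t)

# Real-input FFT: amplitude spectrum and matching frequency axis
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), d=dt)
f_peak = freqs[np.argmax(spec)]            # dominant frequency in Hz
```

Applied along the time axis of every grid point, the same transform yields the frequency-wavenumber diagnostics described above, and it is this embarrassingly parallel per-point work that CUDA accelerates.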
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of the GSSA are prohibitively expensive for computing parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
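A minimal serial sketch of the exact direct-method GSSA, for the single decay reaction A → ∅, shows the structure the paper parallelizes across warps; the rate constant and molecule counts are illustrative:

```python
import numpy as np

def gillespie_decay(n0, k, t_end, rng):
    """Direct-method SSA for the reaction A -> 0 with rate constant k.
    Each step samples an exponential waiting time from the total propensity
    and then fires the (only) reaction."""
    t, n = 0.0, n0
    times, counts = [t], [n]
    while n > 0:
        a = k * n                       # total propensity
        t += rng.exponential(1.0 / a)   # time to next reaction event
        if t > t_end:
            break
        n -= 1                          # fire the decay reaction
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

rng = np.random.default_rng(1)
times, counts = gillespie_decay(1000, 0.5, 10.0, rng)
```

In a multi-species model the propensity update and reaction selection inside the loop are what the fine-grained (per-warp) parallelization targets.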
NASA Astrophysics Data System (ADS)
Abrass, Ahmad; Özel, Mahmut; Groche, Peter
2011-08-01
Roll forming is an effective and economical sheet forming process that is well established in industry for manufacturing large quantities of profile-shaped products. In cold roll forming, a metal sheet is fed through successive pairs of forming rolls until it is formed into the desired cross-sectional profile. The deformation of the sheet is complex; for this reason, theoretical analysis is very difficult, especially if the strain distribution and the occurring forces are to be determined [1]. The design of roll forming processes depends upon a large number of variables and mainly relies upon experience-based knowledge [2]. In order to overcome these challenges and to optimize such processes, FE simulations are used, but the simulation of these processes is time-consuming. The main objective of this work is to accelerate the simulation of roll forming processes by taking advantage of their steady-state properties. These properties allow the transformation of points on the sheet metal according to a mathematical function. This transformation function is determined with the help of the finite element method, and the subsequent forming steps are then computed based on the generated function. With the aid of this method, the computational time can be reduced considerably. The details of the FE model and the new numerical algorithms are described. Furthermore, the results of numerical simulations with and without the application of the developed method are compared with regard to computational time and numerical results.
Nguyen, Trung Dac; Carrillo, Jan-Michael Y; Dobrynin, Andrey V; Brown, W Michael
2013-01-08
Numerous issues have disrupted the trend of increasing computational performance through faster CPU clock frequencies. In order to exploit the potential performance of new computers, it is becoming increasingly desirable to re-evaluate computational physics methods and models with an eye toward approaches that allow for increased concurrency and data locality. The evaluation of long-range Coulombic interactions is a common bottleneck for molecular dynamics simulations. Enhanced truncation approaches have been proposed as an alternative method and are particularly well suited for many-core architectures and GPUs due to the inherent fine-grained parallelism that can be exploited. In this paper, we compare efficient truncation-based approximations for evaluating electrostatic forces with the more traditional particle-particle particle-mesh (P3M) method for the molecular dynamics simulation of polyelectrolyte brush layers. We show that with the use of GPU accelerators, large parallel simulations using P3M can be more than 3 times faster due to a reduction in the required mesh size. Alternatively, using a truncation-based scheme can improve performance even further: this approach can be up to 3.9 times faster than GPU-accelerated P3M for many polymer systems and results in accurate calculation of shear velocities and disjoining pressures for brush layers. For configurations with highly nonuniform charge distributions, however, we find that it is more efficient to use P3M; for these systems, computationally efficient parametrizations of the truncation-based approach do not produce accurate counterion density profiles or brush morphologies.
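One simple member of the family of truncation-based electrostatics schemes is the shifted-force Coulomb interaction, sketched below in reduced units (charges and distances dimensionless). This is a generic illustration of the idea, not the specific parametrization used in the paper:

```python
def shifted_force_coulomb(r, q1, q2, rc):
    """Shifted-force truncation of the Coulomb pair interaction: both the
    energy and the force go smoothly to zero at the cutoff rc, so the pair
    sum needs only local neighbors (ideal for fine-grained parallelism)."""
    if r >= rc:
        return 0.0, 0.0
    qq = q1 * q2
    # U_sf(r) = qq * [1/r - 1/rc + (r - rc)/rc^2]; vanishes at r = rc
    energy = qq * (1.0 / r - 1.0 / rc + (r - rc) / rc**2)
    # F_sf(r) = -dU/dr = qq * [1/r^2 - 1/rc^2]; also vanishes at r = rc
    force = qq * (1.0 / r**2 - 1.0 / rc**2)
    return energy, force
```

Because every pair beyond rc contributes exactly zero, no reciprocal-space (mesh) sum is needed, which is the trade-off the paper weighs against P3M accuracy.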
Convergence acceleration for partitioned simulations of the fluid-structure interaction in arteries
NASA Astrophysics Data System (ADS)
Radtke, Lars; Larena-Avellaneda, Axel; Debus, Eike Sebastian; Düster, Alexander
2016-06-01
We present a partitioned approach to fluid-structure interaction problems arising in analyses of blood flow in arteries. Several strategies to accelerate the convergence of the fixed-point iteration resulting from the coupling of the fluid and the structural sub-problem are investigated. Aitken relaxation and variants of the interface quasi-Newton least-squares method are applied to different test cases, and a hybrid of two well-known variants of the interface quasi-Newton least-squares method is found to perform best. The test cases cover the typical boundary value problem faced when simulating fluid-structure interaction in arteries, including a strong added-mass effect and a wet surface that accounts for a large part of the overall surface of each sub-problem. A rubber-like neo-Hookean material model and a soft-tissue-like Holzapfel-Gasser-Ogden material model are used to describe the artery wall and are compared in terms of stability and computational expense. To avoid any kind of locking, high-order finite elements are used to discretize the structural sub-problem. The finite volume method is employed to discretize the fluid sub-problem. We investigate the influence of mass-proportional damping and of the material model chosen for the artery on the performance and stability of the acceleration strategies as well as on the simulation results. To show the applicability of the partitioned approach to clinically relevant studies, the hemodynamics in a pathologically deformed artery are investigated, taking the findings of the test case simulations into account.
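Aitken relaxation for the coupling iteration can be sketched on a scalar fixed-point problem; here cos(x) is a stand-in for the composed fluid-then-structure operator acting on the interface state, and the starting relaxation factor is an arbitrary illustrative choice:

```python
import numpy as np

def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-10, max_iter=100):
    """Relaxed fixed-point iteration x <- x + omega * (g(x) - x), with
    Aitken's dynamic update of the relaxation factor omega from two
    successive interface residuals."""
    x = x0
    r_old = None
    omega = omega0
    for _ in range(max_iter):
        r = g(x) - x                    # interface residual
        if abs(r) < tol:
            return x
        if r_old is not None:
            denom = r - r_old
            if denom != 0.0:
                omega = -omega * r_old / denom   # Aitken update
        x += omega * r
        r_old = r
    return x

# Model coupled problem: the fixed point of g(x) = cos(x)
x_star = aitken_fixed_point(np.cos, 1.0)
```

In the partitioned FSI setting the same update is applied componentwise (or with a scalar factor) to the interface displacement vector; the quasi-Newton least-squares variants generalize this by using a history of residual differences.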
Jung, Jaewoon; Naurse, Akira; Kobayashi, Chigusa; Sugita, Yuji
2016-10-11
The graphics processing unit (GPU) has become a popular computational platform for molecular dynamics (MD) simulations of biomolecules. A significant speedup in the simulations of small- or medium-size systems using only a few computer nodes with a single or multiple GPUs has been reported. Because of GPU memory limitations and slow communication between GPUs on different computer nodes, it is not straightforward to accelerate MD simulations of large biological systems that contain a few million or more atoms on massively parallel supercomputers with GPUs. In this study, we develop a new scheme in our MD software, GENESIS, to reduce the total computational time on such computers. In this scheme, computationally intensive real-space nonbonded interactions are computed mainly on GPUs, while less intensive bonded interactions and communication-intensive reciprocal-space interactions are performed on CPUs. On the basis of the midpoint cell method as a domain decomposition scheme, we introduce a single-particle interaction list to reduce GPU memory usage. Since the total computational time is limited by the reciprocal-space computation, we utilize RESPA multiple-time-step integration and reduce CPU idle time by assigning a subset of nonbonded interactions to CPUs, in addition to GPUs, when the reciprocal-space computation is skipped. We validated our GPU implementations in GENESIS by MD simulations of BPTI and a membrane protein, porin, and by REMD simulations of an alanine tripeptide. Benchmark calculations on the TSUBAME supercomputer showed that an MD simulation of a million-atom system scaled up to 256 computer nodes with GPUs.
Mirkhani, Nima; Davoudi, Mohammad Reza; Hanafizadeh, Pedram; Javidi, Daryoosh; Saffarian, Niloofar
2016-09-01
Numerical simulation of bileaflet mechanical heart valves (BMHVs) has been of interest to many researchers because of its capability for predicting hemodynamic performance. Many studies have tried to simulate this three-dimensional complex flow in order to analyze the effect of different valve designs on the blood flow pattern; however, simplified models and prescribed leaflet motion were utilized. In this paper, transient complex blood flow at the location of the ascending aorta is investigated in a realistic model by fully coupled simulation. The geometric model for the aorta and the replaced valve is constructed from medical images and extracted point clouds. A 23-mm On-X Medical BMHV, a new-generation design, was selected for the flow field analysis. The two-way coupled simulation is conducted throughout the accelerating phase in order to obtain valve dynamics during the opening process. The complex flow field in the hinge recess is captured precisely for all leaflet positions, and recirculating zones and elevated shear stress areas are observed. Results indicate that the On-X valve yields a relatively low transvalvular pressure gradient, which would lower cardiac external work. Furthermore, the converging inlet leads to a more uniform flow and consequently fewer turbulent eddies; however, the leaflets cannot open fully due to the middle diffuser-shaped orifice. In addition, the asymmetric butterfly-shaped hinge design and converging orifice lead to better hemodynamic performance. With the help of two-way fluid-structure interaction simulation, the leaflet angle follows the experimental trends more closely than the prescribed motion used in previous 3D simulations.
Particle in Cell Simulations of the Pulsar Y-Point -- Nature of the Accelerating Electric Field
NASA Astrophysics Data System (ADS)
Belyaev, Mikhail
2016-06-01
Over the last decade, satellite observations have yielded a wealth of data on pulsed high-energy emission from pulsars. Several different models have been advanced to fit these data, all of which “paint” the emitting region onto a different portion of the magnetosphere. In the last few years, particle-in-cell simulations of pulsar magnetospheres have reached the point where they are able to self-consistently model particle acceleration and dissipation. One of the key findings of these simulations is that the region of the current sheet in and around the Y-point provides the highest rate of dissipation of Poynting flux (Belyaev 2015a). On the basis of this physical evidence, it is quite plausible that this region should be associated with the pulsed high-energy emission from pulsars. We present high-resolution PIC simulations of an axisymmetric pulsar magnetosphere, run using PICsar (Belyaev 2015b). These simulations focus on the particle dynamics and electric fields in and around the Y-point region. We run two types of simulations: first, a force-free magnetosphere, and second, a magnetosphere with a gap between the return current layer and the outflowing plasma in the polar wind zone. The latter setup is motivated by studies of pair production with general relativity (Philippov et al. 2015, Belyaev & Parfrey (in preparation)). In both cases, we find that the Y-point and the current sheet in its direct vicinity act like an “electric particle filter,” outwardly accelerating particles of one sign of charge while returning the other sign of charge back to the pulsar. We argue that this is a natural behavior of the plasma as it tries to adjust to a solution that is as close to force-free as possible. As a consequence, a large E·J develops in the vicinity of the Y-point, leading to dissipation of Poynting flux. Our work is relevant for explaining the plasma-physical mechanisms underlying pulsed high-energy emission from pulsars.
Ha, G.; Power, J.; Kim, S. H.; Gai, W.; Kim, K.-J.; Cho, M. H.; Namkung, W.
2012-12-21
A double triangular (DT) current profile gives a high transformer ratio, which is the determining factor in the performance of a collinear wakefield accelerator. This current profile can be generated using an emittance exchange (EEX) beam line. The Argonne Wakefield Accelerator (AWA) facility plans to generate the DT profile using its EEX beam line. We conducted start-to-end simulations of the AWA beam line using the PARMELA code, and we discuss the beam-parameter requirements for generating the DT profile.
Simulations of ion acceleration at non-relativistic shocks. III. Particle diffusion
Caprioli, D.; Spitkovsky, A.
2014-10-10
We use large hybrid (kinetic-protons-fluid-electrons) simulations to investigate the transport of energetic particles in self-consistent electromagnetic configurations of collisionless shocks. In previous papers of this series, we showed that ion acceleration may be very efficient (up to 10%-20% in energy), and outlined how the streaming of energetic particles amplifies the upstream magnetic field. Here, we measure particle diffusion around shocks with different strengths, finding that the mean free path for pitch-angle scattering of energetic ions is comparable with their gyroradii calculated in the self-generated turbulence. For moderately strong shocks, magnetic field amplification proceeds in the quasi-linear regime, and particles diffuse according to the self-generated diffusion coefficient, i.e., the scattering rate depends only on the amount of energy in modes with wavelengths comparable with the particle gyroradius. For very strong shocks, instead, the magnetic field is amplified up to non-linear levels, with most of the energy in modes with wavelengths comparable to the gyroradii of highest-energy ions, and energetic particles experience Bohm-like diffusion in the amplified field. We also show how enhanced diffusion facilitates the return of energetic particles to the shock, thereby determining the maximum energy that can be achieved in a given time via diffusive shock acceleration. The parameterization of the diffusion coefficient that we derive can be used to introduce self-consistent microphysics into large-scale models of cosmic ray acceleration in astrophysical sources, such as supernova remnants and clusters of galaxies.
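The Bohm-like scaling described above can be made concrete with a short sketch: in the strongly amplified field the diffusion coefficient is of order the gyroradius times the particle speed, D ≈ r_g v / 3. The function below evaluates this for a proton; the energy and field values are illustrative, not the paper's simulation parameters.

```python
import math

# Physical constants (SI)
M_P = 1.67262192e-27   # proton mass, kg
Q_E = 1.60217663e-19   # elementary charge, C
C   = 2.99792458e8     # speed of light, m/s

def bohm_diffusion(kinetic_energy_eV, B_tesla):
    """Bohm-like diffusion coefficient D ~ r_g * v / 3 for a proton
    with the given kinetic energy gyrating in a field B."""
    E_J = kinetic_energy_eV * Q_E
    gamma = 1.0 + E_J / (M_P * C**2)          # Lorentz factor
    v = C * math.sqrt(1.0 - 1.0 / gamma**2)   # particle speed
    r_g = gamma * M_P * v / (Q_E * B_tesla)   # gyroradius, m
    return r_g * v / 3.0                      # m^2 / s

D = bohm_diffusion(1e9, 1e-9)  # 1 GeV proton in a 1 nT field
```

As expected for Bohm scaling, D grows with particle energy and shrinks as the self-generated field is amplified.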
NASA Astrophysics Data System (ADS)
Kato, Tsunehiko N.
2015-04-01
We herein investigate shock formation and particle acceleration processes for both protons and electrons in a quasi-parallel high-Mach-number collisionless shock through a long-term, large-scale, particle-in-cell simulation. We show that both protons and electrons are accelerated in the shock and that these accelerated particles generate large-amplitude Alfvénic waves in the upstream region of the shock. After the upstream waves have grown sufficiently, the local structure of the collisionless shock becomes substantially similar to that of a quasi-perpendicular shock due to the large transverse magnetic field of the waves. A fraction of protons are accelerated in the shock with a power-law-like energy distribution. The rate of proton injection to the acceleration process is approximately constant, and in the injection process, the phase-trapping mechanism for the protons by the upstream waves can play an important role. The dominant acceleration process is a Fermi-like process through repeated shock crossings of the protons. This process is a “fast” process in the sense that the time required for most of the accelerated protons to complete one cycle of the acceleration process is much shorter than the diffusion time. A fraction of the electrons are also accelerated by the same mechanism, and have a power-law-like energy distribution. However, the injection does not enter a steady state during the simulation, which may be related to the intermittent activity of the upstream waves. Upstream of the shock, a fraction of the electrons are pre-accelerated before reaching the shock, which may contribute to steady electron injection at a later time.
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2012-05-01
The effectiveness of the actual annealing strategy in finite-time optimization by simulated annealing (SA) is analyzed by focusing on the search function of the relaxation dynamics observed in the multimodal landscape of the cost function. The rate-cycling experiment, which was introduced in the previous study [M. Hasegawa, Phys. Rev. E 83, 036708 (2011)] to examine the role of the relaxation dynamics in optimization, and the temperature-cycling experiment, which was developed for a laboratory experiment on relaxation-related phenomena, are conducted on two types of random traveling salesman problems (TSPs). In each experiment, the SA search starting from a quenched solution is performed systematically under a nonmonotonic temperature control used in the actual heat treatment of metals and glasses. The results show that, as in the previous monotonic cooling from a random solution, the optimizing ability is enhanced by allocating a lot of time to the search performed near an effective intermediate temperature irrespective of the annealing technique. In this productive phase, the relaxation dynamics successfully function as an optimizer and the relevant characteristics analogous to the stabilization phenomenon and the acceleration of relaxation, which are observed in glass-forming materials, play favorable roles in the present optimization. This nonmonotonic approach also has the advantage of a wider operation range of the effective relaxation dynamics, and in conclusion, the actual annealing strategy is useful and more workable than the conventional slow-cooling strategy, at least for the present TSPs. Further discussion is given of an illuminating aspect of computational physics analysis in the optimization algorithm research.
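As a minimal illustration of the setup described above, the sketch below runs SA on a small random TSP under an arbitrary temperature schedule, including a nonmonotonic, heat-treatment-style one (overall cooling with superimposed temperature cycling). The schedule parameters and the 2-opt move are illustrative choices, not the paper's protocol.

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over city coordinates `pts`."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(pts, schedule, seed=0):
    """SA with Metropolis acceptance under the given temperature
    sequence; returns the final tour and the best cost encountered."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    cur = best = tour_length(tour, pts)
    for T in schedule:
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
        new = tour_length(cand, pts)
        if new <= cur or rng.random() < math.exp((cur - new) / T):
            tour, cur = cand, new
            best = min(best, cur)
    return tour, best

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(12)]
# Nonmonotonic schedule: slow cooling plus cycling, floored above zero.
schedule = [max(0.3 * 0.995 ** k + 0.1 * math.sin(k / 8.0), 1e-3)
            for k in range(3000)]
tour, best = anneal_tsp(pts, schedule)
```

Swapping in a monotonic geometric schedule reproduces the conventional slow-cooling baseline the paper compares against.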
Accelerating atomistic simulations through self-learning bond-boost hyperdynamics
Perez, Danny; Voter, Arthur F
2008-01-01
By altering the potential energy landscape on which molecular dynamics is carried out, the hyperdynamics method of Voter enables one to significantly accelerate the simulation of state-to-state dynamics of physical systems. While very powerful, successful application of the method entails solving the subtle problem of parametrizing the so-called bias potential. In this study, we first clarify the constraints that must be obeyed by the bias potential and demonstrate that fast sampling of the biased landscape is key to obtaining proper kinetics. We then propose an approach by which the bond-boost potential of Miron and Fichthorn can be safely parametrized based on data acquired in the course of a molecular dynamics simulation. Finally, we introduce a procedure, the Self-Learning Bond Boost method, in which the parametrization is efficiently carried out on-the-fly for each new state that is visited during the simulation by safely ramping the strength of the bias potential up to its optimal value. The stability and accuracy of the method are demonstrated.
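The core bookkeeping of hyperdynamics is the accelerated clock: each MD step of length dt advances physical time by dt·exp(ΔV/kT), where ΔV is the bias experienced at that step (Voter's boosted-time formula). The sketch below uses an illustrative constant bias in place of a real bond-boost potential.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def hyperdynamics_time(bias_values_eV, dt_fs, T_kelvin):
    """Accumulate the accelerated ('hyper') time of a biased run:
    physical time advanced = sum over steps of dt * exp(dV / kT),
    where dV is the bias potential felt at that step."""
    beta = 1.0 / (K_B * T_kelvin)
    return sum(dt_fs * math.exp(dV * beta) for dV in bias_values_eV)

# A 1000-step run at 300 K with a constant 0.2 eV bias (illustrative)
t_hyper = hyperdynamics_time([0.2] * 1000, dt_fs=1.0, T_kelvin=300.0)
boost = t_hyper / 1000.0  # ratio of hyper time to plain MD time
```

With these numbers the boost factor is a few thousand; in practice ΔV varies step to step, and tuning its safe maximum is exactly the parametrization problem the abstract addresses.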
Saxena, A. K.; Kaushik, T. C.; Gupta, Satish C.
2010-03-15
Two low-energy (1.6 and 8 kJ) portable electrically exploding foil accelerators have been developed for moderately high pressure shock studies at small laboratory scale. Projectile velocities up to 4.0 km/s have been measured on Kapton flyers of thickness 125 μm and diameter 8 mm, using an in-house developed Fabry-Perot velocimeter. An asymmetric tilt of typically a few milliradians has been measured in the flyers using a fiber-optic technique. High-pressure impact experiments have been carried out on tantalum and aluminum targets up to pressures of 27 and 18 GPa, respectively. Peak particle velocities at the target-glass interface, as measured by the Fabry-Perot velocimeter, are in good agreement with the reported equation-of-state data. A one-dimensional hydrodynamic code based on realistic models of equation of state and electrical resistivity has been developed to numerically simulate the flyer velocity profiles. The numerical scheme is validated against experimental and simulation data reported in the literature on such systems. Numerically computed flyer velocity profiles and final flyer velocities are in close agreement with previously reported experimental results, a significant improvement over reported magnetohydrodynamic simulations. Numerical modeling of the low-energy systems reported here predicts flyer velocity profiles higher than the experimental values, indicating the possibility of further improvement to achieve higher shock pressures.
Mailhiot, C.
1997-10-01
In response to the unprecedented national security challenges derived from the end of nuclear testing, the Defense Programs of the Department of Energy has developed a long-term strategic plan based on a vigorous Science-Based Stockpile Stewardship (SBSS) program. The main objective of the SBSS program is to ensure confidence in the performance, safety, and reliability of the stockpile on the basis of a fundamental science-based approach. A central element of this approach is the development of predictive, full-physics, full-scale computer simulation tools. As a critical component of the SBSS program, the Accelerated Strategic Computing Initiative (ASCI) was established to provide the required advances in computer platforms and to enable predictive, physics-based simulation technologies. Foremost among the key elements needed to develop predictive simulation capabilities, the development of improved physics-based materials models has been universally identified as one of the highest-priority, highest-leverage activities. We indicate some of the materials modeling issues of relevance to stockpile materials and illustrate how the ASCI program will enable the tools necessary to advance the state of the art in computational condensed matter and materials physics.
NASA Astrophysics Data System (ADS)
Underwood, Thomas; Loebner, Keith; Cappelli, Mark
2016-10-01
In this work, the suitability of a pulsed deflagration accelerator to simulate the interaction of edge-localized modes with plasma first-wall materials is investigated. Experimental measurements derived from a suite of diagnostics are presented that focus both on the properties of the plasma jet and on the manner in which such jets couple with material interfaces. Detailed measurements of the thermodynamic plasma state variables within the jet are presented using a quadruple Langmuir probe operating in current-saturation mode. These data, in conjunction with spectroscopic measurements of Hα Stark broadening via a fast-framing, intensified CCD camera, provide spatial and temporal measurements of how the plasma density and temperature scale as a function of input energy. Using these measurements, estimates for the energy flux associated with the deflagration accelerator are found to be completely tunable over a range spanning 150 MW m⁻² to 30 GW m⁻². The plasma-material interface is investigated using tungsten tokens exposed to the plasma plume under variable conditions. Visualizations of the resulting shock structures are achieved through Schlieren cinematography, and energy transfer dynamics are discussed by presenting temperature measurements of the exposed materials. This work is supported by the U.S. Department of Energy Stewardship Science Academic Program in addition to the National Defense Science and Engineering Graduate Fellowship.
Bal, Kristof M; Neyts, Erik C
2015-10-13
The hyperdynamics method is a powerful tool to simulate slow processes at the atomic level. However, the construction of an optimal hyperdynamics potential is a task that is far from trivial. Here, we propose a generally applicable implementation of the hyperdynamics algorithm, borrowing two concepts from metadynamics. First, the use of a collective variable (CV) to represent the accelerated dynamics gives the method a very large flexibility and simplicity. Second, a metadynamics procedure can be used to construct a suitable history-dependent bias potential on-the-fly, effectively turning the algorithm into a self-learning accelerated molecular dynamics method. This collective variable-driven hyperdynamics (CVHD) method has a modular design: both the local system properties on which the bias is based, as well as the characteristics of the biasing method itself, can be chosen to match the needs of the considered system. As a result, system-specific details are abstracted from the biasing algorithm itself, making it extremely versatile and transparent. The method is tested on three model systems: diffusion on the Cu(001) surface and nickel-catalyzed methane decomposition, as examples of “reactive” processes with a bond-length-based CV, and the folding of a long polymer-like chain, using a set of dihedral angles as a CV. Boost factors up to 10⁹, corresponding to a time scale of seconds, could be obtained while still accurately reproducing correct dynamics.
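The history-dependent bias borrowed from metadynamics can be sketched in one dimension: deposit a Gaussian at each visited CV value and sum over the history, so the bias fills basins the system has already explored. The hill height and width below are illustrative, not the paper's settings.

```python
import math

def metadynamics_bias(cv_history, w=0.1, sigma=0.2):
    """Return the metadynamics-style bias V(s): a Gaussian of height
    w and width sigma deposited at every visited CV value, summed
    over the visit history."""
    def V(s):
        return sum(w * math.exp(-(s - s0) ** 2 / (2.0 * sigma ** 2))
                   for s0 in cv_history)
    return V

V = metadynamics_bias([0.0, 0.05, -0.02])
# The bias is largest near the visited region and decays away from it,
# discouraging revisits and accelerating escape from the current state.
```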
Benchmarked Simulations of Slow Capillary Discharges for Laser-Plasma Accelerators
NASA Astrophysics Data System (ADS)
Johnson, Jeffrey; Colella, Phillip; Geddes, Cameron; Mittelberger, Daniel; Bulanov, Stepan; Esarey, Eric; Leemans, Wim; Applied Numerical Algorithms Group (Lbl) Team; Loasis Laboratory (Lbl) Team
2011-10-01
We report our progress on a non-equilibrium, two-temperature plasma model used for slow capillary discharges pertinent to laser-plasma accelerators. In these experiments, energy transport plays a major role in the formation of a plasma channel, which is used to guide the laser and enhance acceleration. We describe a series of simulations used to study the effects of electrical and thermal conduction, diffusion, and externally applied magnetic fields in present and ongoing experiments with relevant geometries and densities. Scylla, a 1D cylindrical plasma/hydro code, was used to explore transport models and to resolve the radial profile of the plasma within the capillary. It has also been benchmarked against existing codes and experimental data. Since the capillary has 3D features such as gas feed slots, we have begun implementing a multi-dimensional AMR plasma model that solves the governing equations on irregular domains. Application to the BELLA Project at LBNL will be discussed. This work was supported by the Department of Energy under contract number DE-AC02-05-CH11231.
Vlasov Simulations of Ladder Climbing and Autoresonant Acceleration of Langmuir Waves
NASA Astrophysics Data System (ADS)
Hara, Kentaro; Barth, Ido; Kaminski, Erez; Dodin, Ilya; Fisch, Nathaniel
2016-10-01
The energy of plasma waves can be moved up and down the spectrum using chirped modulations of plasma parameters, which can be driven by external fields. Depending on the discreteness of the wave spectrum, this phenomenon is called ladder climbing (LC) or autoresonant acceleration (AR) of plasmons, and was first proposed by Barth et al. based on a linear fluid model. Here, we report a demonstration of LC/AR from first principles using fully nonlinear Vlasov simulations of collisionless bounded plasma. We show that, in agreement with the basic theory, plasmons survive substantial transformations of the spectrum and are destroyed only when their wave numbers become large enough to trigger Landau damping. The work was supported by the NNSA SSAA Program through DOE Research Grant No. DE-NA0002948 and the DTRA Grant No. HDTRA1-11-1-0037.
Accelerated equilibrium core composition search using a new MCNP-based simulator
NASA Astrophysics Data System (ADS)
Seifried, Jeffrey E.; Gorman, Phillip M.; Vujic, Jasmina L.; Greenspan, Ehud
2014-06-01
MocDown is a new Monte Carlo depletion and recycling simulator which couples neutron transport with MCNP and transmutation with ORIGEN. This modular approach to depletion allows for flexible operation by incorporating the accelerated progression of a complex fuel processing scheme towards equilibrium and by allowing for the online coupling of thermo-fluids feedback. MocDown also accounts for the variation of decay heat with the evolution of the fuel isotopics. In typical cases, MocDown requires just over a day to find the equilibrium core composition for a multi-recycling fuel cycle with a self-consistent thermo-fluids solution, a task that required between one and two weeks using previous Monte Carlo-based approaches.
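The equilibrium search described above is, at its core, a fixed-point iteration on the fuel composition. The skeleton below captures that loop with a toy two-component map standing in for one full transport-plus-depletion-plus-recycle cycle (the real cycle is an MCNP/ORIGEN calculation, not a closed-form function).

```python
def equilibrium_search(cycle, x0, tol=1e-8, max_cycles=500):
    """Iterate a per-cycle composition map `cycle` until successive
    compositions agree to within `tol`; returns the equilibrium
    composition and the number of cycles taken."""
    x = x0
    for n in range(max_cycles):
        x_new = cycle(x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_cycles

# Toy two-isotope recycle map contracting toward (0.6, 0.4)
eq, cycles = equilibrium_search(
    lambda x: (0.5 * x[0] + 0.3, 0.5 * x[1] + 0.2), (1.0, 0.0))
```

The acceleration MocDown provides lies in how few of these expensive cycles are needed and in reusing information between them; the convergence test itself is this simple.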
GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA
NASA Astrophysics Data System (ADS)
Spiechowicz, J.; Kostur, M.; Machura, L.
2015-06-01
This work presents an updated and extended guide on methods of properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well-known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise, and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can reach a factor of about 3000 compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
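The workhorse of such simulations is Euler-Maruyama integration of the overdamped Langevin equation. Below is a single-trajectory CPU sketch for the Gaussian-white-noise case with a tilted washboard potential U(x) = sin(x), so x' = -cos(x) + f + sqrt(2D)·ξ(t); parameters are illustrative, and the GPU version in the paper runs many such trajectories in parallel.

```python
import math
import random

def brownian_motor_velocity(f=0.1, D=0.5, dt=1e-3, steps=100000, seed=0):
    """Euler-Maruyama integration of an overdamped Brownian particle
    in a tilted periodic potential U(x) = sin(x) with bias force f and
    Gaussian white noise of intensity D; returns the mean velocity
    estimated from the net displacement of one trajectory."""
    rng = random.Random(seed)
    x = 0.0
    noise_amp = math.sqrt(2.0 * D * dt)
    for _ in range(steps):
        drift = -math.cos(x) + f                # -U'(x) + bias force
        x += drift * dt + noise_amp * rng.gauss(0.0, 1.0)
    return x / (steps * dt)

v_mean = brownian_motor_velocity()
```

On a GPU, each thread would evolve an independent trajectory with its own random stream, and the ensemble average of these velocities gives the transport curve.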
Drozhdin, A.; Mokhov, N.; Parker, B.
1994-02-01
The consequences of an accidental beam loss in superconducting accelerators and colliders of the next generation range from the mundane to the dramatic: from a superconducting magnet quench, to overheating of critical components, to total destruction of some units via explosion. Specific measures are required to minimize or eliminate such events as much as practical. In this paper we study such accidents taking the Superconducting Super Collider complex as an example. Particle tracking, beam loss, and energy deposition calculations were done using realistic machine simulation with the Monte Carlo codes MARS 12 and STRUCT. Protective measures for minimizing the damaging effects of prefire and misfire of injection and extraction kicker magnets are proposed here.
Current status of MCNP6 as a simulation tool useful for space and accelerator applications
Mashnik, Stepan G; Bull, Jeffrey S; Hughes, H. Grady; Prael, Richard E; Sierk, Arnold J
2012-07-20
For the past several years, a major effort has been undertaken at Los Alamos National Laboratory (LANL) to develop the transport code MCNP6, the latest LANL Monte-Carlo transport code representing a merger and improvement of MCNP5 and MCNPX. We emphasize a description of the latest developments of MCNP6 at higher energies to improve its reliability in calculating rare-isotope production, high-energy cumulative particle production, and a gamut of reactions important for space-radiation shielding, cosmic-ray propagation, and accelerator applications. We present several examples of validation and verification of MCNP6 compared to a wide variety of intermediate- and high-energy experimental data on reactions induced by photons, mesons, nucleons, and nuclei at energies from tens of MeV to about 1 TeV/nucleon, and compare to results from other modern simulation tools.
NASA Astrophysics Data System (ADS)
Hussein, Amina; Batson, Thomas; Krushelnick, Karl; Willingale, Louise; Arefiev, Alex; Wang, Tao; Nilson, Phil; Froula, Dustin; Haberberger, Dan; Davies, Andrew; Theobald, Wolfgang; Williams, Jackson; Chen, Hui
2016-10-01
The OMEGA EP laser system is used to study channeling phenomena and direct laser acceleration (DLA) through an underdense plasma. The interaction of a ps laser pulse with a subcritical-density CH plasma plume results in the expulsion of electrons along the laser axis, forming a positively charged channel. Electrons confined within this channel are subject to the action of the laser field as well as the transverse electric field of the channel, resulting in the DLA of these electrons and the formation of a high-energy electron beam. We have performed 2D simulations of the interaction of ultra-intense laser radiation with underdense plasma using the PIC code EPOCH to investigate electron densities and self-consistently generated electric fields, as well as electron trajectories. This work was supported by the National Laser Users' Facility (NLUF), DOE.
Simulation of high-average power windows for accelerator production of tritium
Cummings, K A; Daily, L D; Mayhall, D J; Nelson, S D; Salem, J; Shang, C C
1998-08-20
Development of a robust, high-average-power (210 kW, CW) microwave transmission line system for the Accelerator Production of Tritium (APT) facility is a stringent engineering and operational requirement. One key component in this RF transmission system is the vacuum barrier window. The requirement of high-power handling capability, coupled with the desirability of good mean-time-to-failure characteristics, can be treated substantially with a set of microwave, thermal-structural, and Weibull analysis codes. In this paper, we examine realistic 3-D engineering models of the ceramic windows. We model the detailed cooling circuit and make use of accurate heat-deposition models for the RF. This input and simulation detail is used to analyze the thermal-structural induced stresses in baseline coaxial window configurations. We also apply a Weibull-distribution failure analysis.
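The Weibull part of the analysis chain reduces to the two-parameter weakest-link form P_f = 1 − exp(−(σ/σ₀)^m), fed with the stresses from the thermal-structural model. A sketch with an illustrative scale stress and modulus, not the APT window's fitted values:

```python
import math

def weibull_failure_probability(sigma_mpa, sigma0_mpa=200.0, m=10.0):
    """Two-parameter Weibull failure probability for a brittle ceramic
    part under stress sigma: P_f = 1 - exp(-(sigma/sigma0)^m), where
    sigma0 is the scale stress and m the Weibull modulus."""
    return 1.0 - math.exp(-((sigma_mpa / sigma0_mpa) ** m))

p_low = weibull_failure_probability(100.0)   # well below the scale stress
p_high = weibull_failure_probability(250.0)  # above the scale stress
```

The steep dependence on m is why reducing peak thermally induced stress, even modestly, pays off so strongly in mean time to failure.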
Full-scale accelerated pavement testing of Texas Mobile Load Simulator
Chen, D.H.; Hugo, F.
1998-09-01
This paper presents the test results from full-scale accelerated pavement testing with the Texas Mobile Load Simulator. Data from in-situ instrumentation and nondestructive testing were collected and analyzed at different loading stages to assess material property changes under accelerated loading. Forensic studies were made to study material characteristics in the longitudinal and transverse directions. It was found that at the early stage of trafficking the test pad responded to the falling weight deflectometer (FWD) load linearly, not only over the whole pavement system but also within individual layers. Before mobile load simulator testing, FWD data indicated that the weakest area exists at the left wheel path (LWP) of the 7.5-m line (7.5L). Later, this weak area was confirmed to have the highest rutting and the most intensive cracking. The dynamic cone penetration results showed that the base at this location was at its weakest. Also, at 7.5L the dry density was lowest, approximately 7% lower, with a moisture content approximately 8% higher than the adjacent area. The LWP had higher FWD deflections than the right wheel path (RWP), and consequently the LWP manifested more rutting. This proved to be primarily due to differences in moisture content. This was probably because more water infiltrated in the area during rain due to manifestation of more extensive cracking during early phases of trafficking. The maximum surface deflection values increased as trafficking increased in the left and right wheel paths due to pavement deterioration, while deflection for the center remained constant because of the lack of traffic loading. The LWP had more rutting than the RWP, and this correlated with the measured FWD deflections prior to trafficking. The WI values increased as trafficking increased for the LWP and RWP due to pavement deterioration. The majority (>60%) of rutting was from the 300-mm uncrushed river gravel base.
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over the past two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing the computational time of MC simulation and obtaining a simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
Accelerating Simulation of Seismic Wave Propagation by Multi-GPUs (Invited)
NASA Astrophysics Data System (ADS)
Okamoto, T.; Takenaka, H.; Nakamura, T.; Aoki, T.
2010-12-01
Simulation of seismic wave propagation is essential in modern seismology: the effects of irregular topography of the surface, internal discontinuities and heterogeneity on the seismic waveforms must be precisely modeled in order to probe the Earth's and other planets' interiors, to study the earthquake sources, and to evaluate the strong ground motions due to earthquakes. Devices with high computing performance are necessary because in large scale simulations more than one billion grid points are required. GPU (Graphics Processing Unit) is a remarkable device for its many-core architecture with more-than-one-hundred processing units, and its high memory bandwidth. Now GPU delivers extremely high computing performance (more than one tera-flops in single-precision arithmetic) at a reduced power and cost compared to conventional CPUs. The simulation of seismic wave propagation is a memory-intensive problem which involves large amounts of data transfer between the memory and the arithmetic units while the number of arithmetic calculations is relatively small. Therefore the simulation should benefit from the high memory bandwidth of the GPU. Thus several approaches to adopt GPU to the simulation of seismic wave propagation have been emerging (e.g., Komatitsch et al., 2009; Micikevicius, 2009; Michea and Komatitsch, 2010; Aoi et al., SSJ 2009, JPGU 2010; Okamoto et al., SSJ 2009, SACSIS 2010). In this paper we describe our approach to accelerate the simulation of seismic wave propagation based on the finite-difference method (FDM) by adopting multi-GPU computing. The finite-difference scheme we use is the three-dimensional, velocity-stress staggered grid scheme (e.g., Graves 1996; Moczo et al., 2007) for heterogeneous medium with perfect elasticity (incorporation of anelasticity is underway). We use the GPUs (NVIDIA S1070, 1.44 GHz) installed in the TSUBAME grid cluster in the Global Scientific Information and Computing Center, Tokyo Institute of Technology and NVIDIA
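The velocity-stress staggered-grid scheme mentioned above is easiest to see in 1D: velocity and stress live on grids offset by half a cell and leapfrog in time, velocity updated from the stress gradient and then stress from the updated velocity gradient. A minimal sketch with illustrative parameters (the abstract's code is 3D and GPU-accelerated):

```python
import math

def staggered_grid_step(v, s, rho, mu, dt, dx):
    """One leapfrog step of the 1D velocity-stress staggered-grid FD
    scheme: update velocity from the stress gradient, then stress
    from the updated velocity gradient. End points are left fixed as
    crude boundaries."""
    n = len(v)
    for i in range(1, n):
        v[i] += dt / (rho * dx) * (s[i] - s[i - 1])
    for i in range(n - 1):
        s[i] += dt * mu / dx * (v[i + 1] - v[i])
    return v, s

# A Gaussian stress pulse splits into left- and right-going waves;
# dt = 0.5 satisfies the CFL limit dt <= dx / c with c = sqrt(mu/rho) = 1.
n = 200
v = [0.0] * n
s = [math.exp(-((i - 100) / 5.0) ** 2) for i in range(n)]
for _ in range(100):
    v, s = staggered_grid_step(v, s, rho=1.0, mu=1.0, dt=0.5, dx=1.0)
```

The memory-bound character the abstract describes is visible even here: each update touches several array cells but performs only a couple of arithmetic operations, which is why GPU memory bandwidth pays off.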
A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator
Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein
2014-01-01
Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore, and Penelope electromagnetic (EM) physics packages of GEANT4 Application for Tomographic Emission (GATE) 6.1 were compared with Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon Linac. To do this, the same geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom, and diode detectors. No significant deviations were found in PDD, dose profile, energy spectrum, radial mean energy, or photon radial distribution as calculated by the Standard and Livermore EM models and MCNPX. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty in all the simulations was <1%, namely 0.51%, 0.27%, 0.27%, and 0.29% for PDDs of 10 cm² × 10 cm² field size for the MCNPX, Standard, Livermore, and Penelope models, respectively. Differences between spectra in various regions, in radial mean energy, and in photon radial distribution were due to different cross-section and stopping-power data and to the physics processes not being simulated identically in MCNPX and the three EM models. For example, in the Standard model the photoelectron direction was sampled from the Gavrila-Sauter distribution, whereas the photoelectron moved in the same direction as the incident photon in the photoelectric process of the Livermore and Penelope models. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model. PMID:24696804
Oleg E. Krivosheev et al.
2001-07-02
This paper describes the experimental setup and presents studies of absorbed doses in different metals and dielectrics, along with corresponding Monte Carlo energy deposition simulations. Experiments were conducted using a 5 MeV electron accelerator. We used several Monte Carlo code systems, namely MARS, MCNP and GEANT, to simulate the absorbed doses under the same conditions as in the experiment. We compare calculated and measured high and low absorbed doses (from a few kGy to hundreds of kGy) and discuss the applicability of these computer codes for applied accelerator dosimetry.
NASA Astrophysics Data System (ADS)
Sinha, N.; York, B. J.; Dash, S. M.; Drabczuk, R.; Rolader, G. E.
1992-07-01
This paper describes the development of an advanced CFD simulation capability in support of the U.S. Air Force Armament Directorate's ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high-fidelity, transient ram accelerator simulations through the inclusion of generalized dynamic gridding, solution-adaptive grid clustering, and high-pressure thermochemistry, among other features. Selected ram accelerator simulations are presented which exhibit the CRAFT code's capabilities and identify some of the principal research and design issues.
Accelerating the Design of Solar Thermal Fuel Materials through High Throughput Simulations
Liu, Y; Grossman, JC
2014-12-01
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
Accelerated path integral methods for atomistic simulations at ultra-low temperatures
NASA Astrophysics Data System (ADS)
Uhl, Felix; Marx, Dominik; Ceriotti, Michele
2016-08-01
Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, their computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, in which the reaction numbers are modeled by Poisson random numbers. For certain systems, however, such methods give rise to physically unrealistic negative species populations. Methods that use binomial random variables in place of Poisson random numbers have since become popular and have been partially successful in addressing this problem. This manuscript discusses the development of two new computational methods based on the representative reaction approach (RRA). The new methods endeavor to solve the problem of negative numbers by using tools such as the stochastic simulation algorithm and the binomial method in conjunction with the RRA. These newly developed methods are found to perform better than other binomial methods used for stochastic simulations in resolving the problem of negative populations.
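For context, the exact stochastic simulation algorithm (SSA) that such accelerated methods build on fires one reaction event at a time, which is why it can never produce a negative population, unlike Poisson leaping, which may draw more events than there are molecules. A minimal SSA sketch for a single decay reaction A → B (all parameter values are illustrative):

```python
import random

def gillespie_decay(n_a, k, t_end, seed=0):
    """Exact SSA for the reaction A -> B: exactly one event fires per step,
    so the population of A can never go negative."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        a0 = k * n_a                  # total propensity of the system
        t += rng.expovariate(a0)      # exponentially distributed waiting time
        if t > t_end:
            break
        n_a -= 1                      # fire one A -> B event
    return n_a

remaining = gillespie_decay(n_a=100, k=0.5, t_end=20.0)
print(remaining)
```

The cost is one random-number draw per reaction event, which is exactly the expense that leaping and RRA-style methods aim to amortize.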
Mills, F.; Makino, Kyoko; Berz, Martin; Johnstone, C.
2010-09-01
With the U.S. experimental effort in HEP largely located at laboratories supporting the operation of large, highly specialized accelerators, colliding beam facilities, and detector facilities, the understanding and prediction of high-energy particle accelerators becomes critical to the overall success of the DOE HEP program. One area in which small businesses can contribute to the ongoing success of the U.S. program in HEP is through innovations in computer techniques and sophistication in the modeling of high-energy accelerators. Accelerator modeling at these facilities is performed by experts, with the product generally highly specific and representative only of in-house accelerators or special-interest accelerator problems. The development of new types of accelerators such as FFAGs, with their wide choice of parameter modifications, complicated fields, and the simultaneous need to efficiently handle very-large-emittance beams, requires new simulation environments to assure predictability in operation. In this context, ease of use and well-designed interfaces are critical to realizing a successful model or to optimizing a new design or the working parameters of machines. In Phase I, various core modules for the design and analysis of FFAGs were developed, and Graphical User Interfaces (GUIs) were investigated in place of the more general, yet less easily manageable, console-type output that COSY provides.
NASA Astrophysics Data System (ADS)
Mauch, Florian; Fleischle, David; Lyda, Wolfram; Osten, Wolfgang; Krug, Torsten; Häring, Reto
2011-05-01
Simulation of grating spectrometers poses the problem of propagating a spectrally broad light field through a macroscopic optical system that contains a nanostructured grating surface. The aim of the simulation is to quantify and optimize the stray light behaviour, which is the limiting factor in modern high-end spectrometers. To accomplish this, we present a simulation scheme that combines an RCWA (rigorous coupled wave analysis) simulation of the grating surface with a self-made GPU (graphics processing unit) accelerated non-sequential raytracer. Using this approach, we are able to represent the broad spectrum of the light field as a superposition of many monochromatic ray sets and to handle the huge number of rays in reasonable time.
Accelerating groundwater flow simulation in MODFLOW using JASMIN-based parallel computing.
Cheng, Tangpei; Mo, Zeyao; Shao, Jingli
2014-01-01
To accelerate the groundwater flow simulation process, this paper reports our work on developing an efficient parallel simulator by rebuilding the well-known software MODFLOW on JASMIN (J Adaptive Structured Meshes applications Infrastructure). The rebuilding is achieved by designing patch-based data structures and parallel algorithms and by adding slight modifications to the computational flow and subroutines of MODFLOW. Both the memory requirements and the computing effort are distributed among all processors; to reduce communication cost, data transfers are batched and conveniently handled by adding ghost nodes to each patch. To further improve performance, constant-head/inactive cells are tagged and neglected during the linear solve, and an efficient load-balancing strategy is presented. The accuracy and efficiency are demonstrated through three scenarios. The first application is a field flow problem at Yanming Lake in China, used to help determine a reasonable quantity of groundwater exploitation; desirable numerical accuracy and significant performance enhancement are obtained, and the tagged program with the load-balancing strategy running on 40 cores is six times faster than the fastest MICCG-based MODFLOW program. The second test simulates flow in a highly heterogeneous aquifer; the AMG-based JASMIN program running on 40 cores is nine times faster than the GMG-based MODFLOW program. The third test is a simplified transient flow problem with on the order of tens of millions of cells, used to examine scalability: compared to 32 cores, parallel efficiencies of 77% and 68% are obtained on 512 and 1024 cores, respectively, indicating impressive scalability.
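The ghost-node idea described above can be illustrated with a tiny one-dimensional example: each patch is padded with one ghost cell per side, and a single batched exchange fills the ghosts with the neighbours' boundary values so a stencil update needs no further communication. A sketch (the patch layout and values are invented):

```python
import numpy as np

def exchange_ghosts(patches):
    """One batched ghost-cell exchange for a 1-D domain split into patches.
    Position 0 and -1 of each patch are ghost cells; the interior edges of
    the neighbours are copied into them."""
    for i, p in enumerate(patches):
        if i > 0:
            p[0] = patches[i - 1][-2]    # left ghost <- left neighbour's edge
        if i < len(patches) - 1:
            p[-1] = patches[i + 1][1]    # right ghost <- right neighbour's edge

# Two patches of the field [10, 20, 30, 40], each padded with ghost cells:
left  = np.array([0.0, 10.0, 20.0, 0.0])
right = np.array([0.0, 30.0, 40.0, 0.0])
exchange_ghosts([left, right])
print(left[-1], right[0])
```

In a distributed run the two copies would be MPI messages; batching them per patch, as JASMIN does, is what keeps the communication cost low.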
Dybeck, Eric Christopher; Plaisance, Craig Patrick; Neurock, Matthew
2017-02-14
A novel algorithm has been developed to achieve temporal acceleration during kinetic Monte Carlo (KMC) simulations of surface catalytic processes. This algorithm allows for the direct simulation of reaction networks containing kinetic processes occurring on vastly disparate timescales which computationally overburden standard KMC methods. Previously developed methods for temporal acceleration in KMC have been designed for specific systems and often require a priori information from the user such as identifying the fast and slow processes. In the approach presented herein, quasi-equilibrated processes are identified automatically based on previous executions of the forward and reverse reactions. Temporal acceleration is achieved by automatically scaling the intrinsic rate constants of the quasi-equilibrated processes, bringing their rates closer to the timescales of the slow kinetically relevant non-equilibrated processes. All reactions are still simulated directly, although with modified rate constants. Abrupt changes in the underlying dynamics of the reaction network are identified during the simulation and the reaction rate constants are rescaled accordingly. The algorithm has been utilized here to model the Fischer-Tropsch synthesis reaction over ruthenium nanoparticles. This reaction network has multiple timescale-disparate processes which would be intractable to simulate without the aid of temporal acceleration. The accelerated simulations are found to give reaction rates and selectivities indistinguishable from those calculated by an equivalent mean-field kinetic model. The computational savings of the algorithm can span many orders of magnitude in realistic systems and the computational cost is not limited by the magnitude of the timescale disparity in the system processes. Furthermore, the algorithm has been designed in a generic fashion and can easily be applied to other surface catalytic processes of interest.
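The core idea of automatically scaling down the rate constants of quasi-equilibrated processes can be sketched in a few lines. The detection rule and the parameters below (n_e, alpha, the balance tolerance) are illustrative stand-ins, not the paper's actual criteria:

```python
def scale_quasi_equilibrated(rates, counts, pairs, n_e=100, alpha=0.01):
    """Sketch of KMC temporal acceleration: a forward/reverse pair that has
    each fired at least n_e times with nearly balanced counts is treated as
    quasi-equilibrated, and both intrinsic rate constants are scaled down by
    alpha, pulling its timescale toward the slow, kinetically relevant steps."""
    scaled = dict(rates)
    for fwd, rev in pairs:
        nf, nr = counts.get(fwd, 0), counts.get(rev, 0)
        if nf >= n_e and nr >= n_e and abs(nf - nr) <= 0.1 * (nf + nr):
            scaled[fwd] = rates[fwd] * alpha
            scaled[rev] = rates[rev] * alpha
    return scaled

# Hypothetical surface processes: fast adsorption/desorption, slow reaction.
rates = {"ads": 1e6, "des": 1e6, "rxn": 1.0}
counts = {"ads": 5000, "des": 4900, "rxn": 3}
scaled = scale_quasi_equilibrated(rates, counts, [("ads", "des")])
print(scaled)
```

All reactions are still simulated directly with the scaled constants, which is why equilibrium properties are preserved while the slow kinetics become reachable.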
Burlon, Alejandro A.; Valda, Alejandro A.; Girola, Santiago; Minsky, Daniel M.; Kreiner, Andres J.
2010-08-04
Within the framework of the construction of a Tandem Electrostatic Quadrupole Accelerator facility devoted to Accelerator-Based Boron Neutron Capture Therapy, a Beam Shaping Assembly has been characterized by means of Monte Carlo simulations and measurements. The neutrons were generated via the ⁷Li(p,n)⁷Be reaction by irradiating a thick LiF target with a 2.3 MeV proton beam delivered by the TANDAR accelerator at CNEA. The emerging neutron flux was measured by means of activation foils, while the beam quality and directionality were evaluated by means of Monte Carlo simulations. The parameters comply with those suggested by the IAEA. Finally, an improvement obtained by adding a beam collimator has been evaluated.
Bush, Karl K; Zavgorodni, Sergei F
2010-12-01
Monte Carlo simulation of clinical treatment plans requires, in general, a coordinate transformation to describe the orientation of the incident radiation field in the patient/phantom coordinate system. The International Electrotechnical Commission (IEC) has defined an accelerator coordinate system along with positive directions for gantry, couch and collimator rotations. To describe the incident beam's orientation with respect to the patient's coordinate system, DOSXYZnrc simulations often require transformation of the accelerator's gantry, couch and collimator angles. Similarly, versions of the voxelized Monte Carlo code (VMC++) require a non-trivial transformation of the accelerator's gantry, couch and collimator angles into standard Euler angles α, β, γ to describe the orientation of an incident phase-space source with respect to the patient's coordinate system. The transformations required by each of these Monte Carlo codes to transport phase spaces through a phantom have been derived with a rotation-operator approach. The transformations have been tested and verified against the Eclipse treatment planning system.
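The rotation-operator approach amounts to composing elementary rotations about coordinate axes, and any such composition must remain orthogonal with determinant +1, which gives a quick sanity check on a derived transformation. The axis assignments below are assumptions for illustration only, not the IEC 61217 or DOSXYZnrc conventions:

```python
import numpy as np

def rot(axis, angle_deg):
    """Right-handed rotation matrix about coordinate axis 'x', 'y' or 'z'."""
    c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical composition: collimator about the beam axis, gantry about the
# room's y axis, couch about the vertical axis (angles are arbitrary examples).
M = rot("z", 30.0) @ rot("y", 45.0) @ rot("z", 10.0)

# Any composition of rotations is itself a rotation: orthogonal, det = +1.
ok = bool(np.allclose(M @ M.T, np.eye(3)))
det = float(np.linalg.det(M))
print(ok, round(det, 6))
```

Extracting Euler angles from such a composed matrix is then a bookkeeping exercise, with care needed near gimbal-lock configurations.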
NASA Astrophysics Data System (ADS)
Cassanto, J. M.; Ziserman, H. I.; Chapman, D. K.; Korszun, Z. R.; Todd, P.
Microgravity experiments designed for execution in Get-Away Special canisters, Hitchhiker modules, and Reusable Re-entry Satellites will be subjected to launch and re-entry accelerations. Crew-dependent provisions for preventing acceleration damage to equipment or products will not be available for these payloads during flight; therefore, the effects of launch and re-entry accelerations on all aspects of such payloads must be evaluated prior to flight. A procedure was developed for conveniently simulating the launch and re-entry acceleration profiles of the Space Shuttle (3.3 and 1.7 × g maximum, respectively) and of two versions of NASA's proposed materials research Re-usable Re-entry Satellite (8 × g maximum in one case and 4 × g in the other). By using the 7 m centrifuge of the Gravitational Plant Physiology Laboratory in Philadelphia it was found possible to simulate the time dependence of these 5 different acceleration episodes for payload masses up to 59 kg. A commercial low-cost payload device, the “Materials Dispersion Apparatus” of Instrumentation Technology Associates was tested for (1) integrity of mechanical function, (2) retention of fluid in its compartments, and (3) integrity of products under simulated re-entry g-loads. In particular, the sharp rise from 1 g to maximum g-loading that occurs during re-entry in various unmanned vehicles was successfully simulated, conditions were established for reliable functioning of the MDA, and crystals of 5 proteins suspended in compartments filled with mother liquor were subjected to this acceleration load.
NASA Astrophysics Data System (ADS)
Matsumoto, Yosuke; Amano, Takanobu; Hoshino, Masahiro
2012-08-01
Electron acceleration at high Mach number collisionless shocks is investigated by means of two-dimensional electromagnetic particle-in-cell simulations with various Alfvén Mach numbers, ion-to-electron mass ratios, and values of the upstream electron βe (the ratio of the thermal pressure to the magnetic pressure). We find that electrons are effectively accelerated at a super-high Mach number shock (MA ~ 30) with a mass ratio of M/m = 100 and βe = 0.5. Electron shock surfing acceleration is an effective mechanism for accelerating the particles toward the relativistic regime even in two dimensions with a large mass ratio. The Buneman instability excited at the leading edge of the foot in the super-high Mach number shock results in a coherent electrostatic potential structure. While multi-dimensionality allows the electrons to escape from the trapping region, they can interact with the strong electrostatic field several times. Simulation runs in various parameter regimes indicate that electron shock surfing acceleration is an effective mechanism for producing relativistic particles in extremely high Mach number shocks in supernova remnants, provided that the upstream electron temperature is reasonably low.
NASA Astrophysics Data System (ADS)
Matsumoto, Y.; Amano, T.; Hoshino, M.
2012-12-01
Electron accelerations at high Mach number collisionless shocks are investigated by means of two-dimensional electromagnetic particle-in-cell simulations with various Alfvén Mach numbers, ion-to-electron mass ratios, and values of the upstream electron βe (the ratio of the thermal pressure to the magnetic pressure). We found that electrons are effectively accelerated at a super-high Mach number shock (MA ~ 30) with a mass ratio of M/m = 100 and βe = 0.5. The electron shock surfing acceleration is an effective mechanism for accelerating the particles toward the relativistic regime even in two dimensions with a large mass ratio. Buneman instability excited at the leading edge of the foot in the super-high Mach number shock results in a coherent electrostatic potential structure. While multi-dimensionality allows the electrons to escape from the trapping region, they can interact with the strong electrostatic field several times. Simulation runs in various parameter regimes indicate that the electron shock surfing acceleration is an effective mechanism for producing relativistic particles in extremely high Mach number shocks in supernova remnants, provided that the upstream electron temperature is reasonably low. Matsumoto et al., Astrophys. J., 755, 109, 2012.
Werner, Liliana; Abdel-Aziz, Salwa; Peck, Carolee Cutler; Monson, Bryan; Espandar, Ladan; Zaugg, Brian; Stringham, Jack; Wilcox, Chris; Mamalis, Nick
2011-01-01
PURPOSE To assess the long-term biocompatibility and photochromic stability of a new photochromic hydrophobic acrylic intraocular lens (IOL) under extended ultraviolet (UV) light exposure. SETTING John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, USA. DESIGN Experimental study. METHODS A Matrix Aurium photochromic IOL was implanted in the right eyes, and a Matrix Acrylic IOL without photochromic properties (n = 6) or a single-piece AcrySof Natural SN60AT IOL (n = 5) in the left eyes, of 11 New Zealand rabbits. The rabbits were exposed to a UV light source of 5 mW/cm2 for 3 hours during every 8-hour period, equivalent to 9 hours a day, and followed for up to 12 months. The photochromic changes were evaluated during slitlamp examination by shining a penlight UV source in the right eye. After the rabbits were humanely killed and the eyes enucleated, study and control IOLs were explanted and evaluated in vitro under UV exposure and studied histopathologically. RESULTS The photochromic IOL was as biocompatible as the control IOLs after 12 months under conditions simulating at least 20 years of UV exposure. In vitro evaluation confirmed the retained optical properties, with photochromic changes observed within 7 seconds of UV exposure. The rabbit eyes had the clinical and histopathological changes expected in this model with a 12-month follow-up. CONCLUSIONS The new photochromic IOL turned yellow only on exposure to UV light. The photochromic changes were reversible, reproducible, and stable over time. The IOL was biocompatible with up to 12 months of accelerated UV exposure simulation. PMID:21241924
Becchetti, M; Tian, X; Segars, P; Samei, E
2015-06-15
Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU-accelerated MC code for helical MDCT. Simulations were performed both with uniform-density organs and with textured organs. The organ doses were validated against previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for an accurate representation of noise. A substantial speed-up was attained by using a low number of photon histories together with kernel denoising of the projections from the scattered photons. The resulting FBP-reconstructed images were validated against images from simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. The corresponding images obtained with projection kernel smoothing required three orders of magnitude less computation time than a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered-photon projections in MC simulations allows organ dose and corresponding image quality to be obtained with reasonable accuracy and substantially less computation time than standard simulation approaches.
Neural-network accelerated fusion simulation with self-consistent core-pedestal coupling
NASA Astrophysics Data System (ADS)
Meneghini, O.; Candy, J.; Snyder, P. B.; Staebler, G.; Belli, E.
2016-10-01
Practical fusion Whole Device Modeling (WDM) simulations require the ability to perform predictions that are fast yet account for the sensitivity of fusion performance to the boundary constraint imposed by the pedestal structure of H-mode plasmas, a sensitivity that arises from the stiffness of core transport models. This poster presents the development of a set of neural-network (NN) models for the pedestal structure (as predicted by the EPED model) and for the neoclassical and turbulent transport fluxes (as predicted by the NEO and TGLF codes, respectively), together with their self-consistent coupling within the TGYRO transport code. The results are benchmarked against those obtained via the coupling scheme described in [Meneghini PoP 2016]. By substituting the most demanding codes with their NN-accelerated versions, the solution can be found at a fraction of the computational cost of the original coupling scheme, thereby combining the accuracy of a high-fidelity model with the fast turnaround time of a reduced model. Work supported by U.S. DOE DE-FC02-04ER54698 and DE-FG02-95ER54309.
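The self-consistent coupling described above is, at its core, a fixed-point iteration between two models: the pedestal model supplies a boundary value to the core model, whose output feeds back into the pedestal model until neither changes. A minimal sketch with invented linear stand-ins (not EPED/NEO/TGLF) that has an analytic fixed point:

```python
def couple(core_model, pedestal_model, x0, tol=1e-8, max_iter=200):
    """Fixed-point iteration: alternate the two models until the state stops
    changing to within tol, or fail after max_iter sweeps."""
    x = x0
    for _ in range(max_iter):
        boundary = pedestal_model(x)
        x_new = core_model(boundary)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("coupling did not converge")

# Invented linear stand-ins with analytic fixed point x* = 11/7:
def core(boundary):      # cheap surrogate for a core transport solve
    return 0.5 * boundary + 1.0

def ped(x):              # cheap surrogate for a pedestal model
    return 0.2 + 0.6 * x

x_star = couple(core, ped, x0=0.0)
print(round(x_star, 6))
```

Replacing `core` and `ped` with NN surrogates leaves this outer loop unchanged, which is why the NN-accelerated scheme can be benchmarked directly against the original coupling.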
Numerical simulations of recent proton acceleration experiments with sub-100 TW laser systems
NASA Astrophysics Data System (ADS)
Sinigardi, Stefano
2016-09-01
Recent experiments carried out at the National Optics Institute Department of the Italian National Research Council in Pisa have shown interesting results regarding the maximum proton energies achievable with sub-100 TW laser systems. While laser systems are continuously upgraded in laboratories around the world, a growing effort is at the same time being devoted to stabilizing ion acceleration and making its results reproducible. Almost all applications require a beam with fixed performance, so that the energy spectrum and the total charge exhibit only moderate shot-to-shot variations. This goal is still far from being achieved, but many paths are being explored to reach it. Some of the variability comes from fluctuations in laser intensity and focusing due to optics instability; other sources are small differences in the target structure. The target structure can vary substantially by the time it is impacted by the main pulse, depending on the prepulse duration and intensity, the shape of the main pulse, and the total energy deposited. To qualitatively describe the prepulse effect, we present a two-dimensional parametric scan of its relevant parameters. A single case is also analyzed with a full three-dimensional simulation, obtaining reasonable agreement between the numerical and the experimental energy spectrum.
The GENGA code: gravitational encounters in N-body simulations with GPU acceleration
Grimm, Simon L.; Stadel, Joachim G.
2014-11-20
We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.
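The symplectic building block behind hybrid integrators of this kind is the kick-drift-kick leapfrog, whose hallmark is bounded long-term energy error rather than secular drift. A minimal sketch for a two-body Kepler orbit in code units (G = M = 1; this illustrates the generic scheme, not GENGA's actual implementation):

```python
import math

def leapfrog_kepler(x, y, vx, vy, dt, steps):
    """Kick-drift-kick leapfrog for a unit-mass test particle around a unit
    point mass at the origin (G = 1)."""
    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3
    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

# Circular orbit at radius 1: E = -0.5 exactly.
e0 = energy(1.0, 0.0, 0.0, 1.0)
xf, yf, vxf, vyf = leapfrog_kepler(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=10000)
drift = abs(energy(xf, yf, vxf, vyf) - e0)
print(drift)   # remains bounded over many orbits -- the symplectic hallmark
```

Hybrid schemes such as GENGA's switch to a direct integration of close encounters while retaining this symplectic structure for the far field.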
Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Qin; Jiang, Xiao-Guo
2012-09-01
To study the spectrum reconstruction of the 20 MV X-rays generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuation of the X-rays in attenuators of different thicknesses and thus provide the transmission data. As is well known, spectrum estimation from transmission data is an ill-conditioned problem. A method based on iterative perturbations is employed to derive the X-ray spectra, with initial guesses used to start the process. The algorithm takes into account not only the minimization of the differences between the measured and calculated transmissions but also the smoothness of the spectrum function. In this work, various filter materials are used as attenuators, and the conditions for an accurate and robust solution of the X-ray spectrum calculation are demonstrated. The influence of scattered photons within different intervals of emergence angle on the X-ray spectrum reconstruction is also analyzed.
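The structure of the inverse problem, and the role of the smoothness constraint, can be illustrated with a toy discretized forward model. All numbers below are invented, and simple Tikhonov smoothing stands in for the paper's iterative-perturbation method:

```python
import numpy as np

# Toy forward model: transmission through an attenuator of thickness t_j is
# sum_i w_i * exp(-mu_i * t_j) for a discretized spectrum w. The real problem
# grows increasingly ill-conditioned as the energy binning is refined.
mu = np.array([0.2, 0.5, 1.0, 2.0])       # attenuation coefficient per energy bin
t = np.linspace(0.0, 10.0, 12)            # attenuator thicknesses
A = np.exp(-np.outer(t, mu))              # system matrix
w_true = np.array([0.1, 0.4, 0.4, 0.1])   # "true" spectrum weights
y = A @ w_true                            # noiseless simulated transmissions

# Smoothness constraint: penalize the second differences of the spectrum.
D = np.diff(np.eye(len(mu)), 2, axis=0)   # second-difference operator
lam = 1e-6
A_aug = np.vstack([A, lam * D])
y_aug = np.concatenate([y, np.zeros(D.shape[0])])
w_est, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
print(np.round(w_est, 3))
```

With noisy data the regularization weight matters much more; here the noiseless setup simply shows that the smoothed least-squares problem recovers the input spectrum.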
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.
Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor S.
2013-05-23
Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, a simulation can take hours or even days on a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by performing parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on measurements and analysis of the time usage of the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
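The matrix-multiplication core of the three-phase method computes sensor illuminance as the product i = V T D s of the view, transmission, and daylight matrices with a sky vector. A minimal sketch (matrix sizes and random data are illustrative; 145 patches is the conventional Klems basis resolution) shows why evaluation order matters for the per-timestep cost, independently of any GPU parallelization:

```python
import numpy as np

# Illustrative (hypothetical) sizes: 1000 sensor points, the conventional
# 145-patch Klems basis for window patches, 146 sky divisions.
rng = np.random.default_rng(0)
V = rng.random((1000, 145))   # view matrix: sensor points x window patches
T = rng.random((145, 145))    # transmission (BSDF) matrix of the fenestration
D = rng.random((145, 146))    # daylight matrix: window patches x sky divisions
s = rng.random(146)           # sky vector for one timestep

# Left-to-right evaluation forms two expensive matrix-matrix products.
i_slow = ((V @ T) @ D) @ s
# Right-to-left evaluation needs only cheap matrix-vector products per timestep.
i_fast = V @ (T @ (D @ s))
```

Both orderings give the same illuminance vector; the paper's OpenCL implementation parallelizes these products across the compute units of the CPU or GPU.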
NASA Astrophysics Data System (ADS)
Shim, Yunsic; Amar, Jacques G.; Uberuaga, B. P.; Voter, A. F.
2007-11-01
We present a method for performing parallel temperature-accelerated dynamics (TAD) simulations over extended length scales. In our method, a two-dimensional spatial decomposition is used along with the recently proposed semirigorous synchronous sublattice algorithm of Shim and Amar [Phys. Rev. B 71, 125432 (2005)]. The scaling behavior of the simulation time as a function of system size is studied and compared with serial TAD in simulations of the early stages of Cu/Cu(100) growth as well as for a simple case of surface relaxation. In contrast to the corresponding serial TAD simulations, for which the simulation time t_ser increases as a power of the system size N (t_ser ∼ N^x) with an exponent x that can be as large as three, in our parallel simulations the simulation time increases only logarithmically with system size. As a result, even for relatively small system sizes our parallel TAD simulations are significantly faster than the corresponding serial TAD simulations. The significantly improved scaling behavior of our parallel TAD simulations over the corresponding serial simulations indicates that our parallel TAD method may be useful in performing simulations over significantly larger length scales than serial TAD, while preserving all the atomistic details provided by the TAD method.
Community Project for Accelerator Science and Simulation (ComPASS) Final Report
Cary, John R.; Cowan, Benjamin M.; Veitzer, S. A.
2016-03-04
Tech-X participated across the full range of ComPASS activities, with efforts in the Energy Frontier primarily through modeling of laser plasma accelerators and dielectric laser acceleration, in the Intensity Frontier primarily through electron cloud modeling, and in Uncertainty Quantification being applied to dielectric laser acceleration. In the following we present the progress and status of our activities for the entire period of the ComPASS project for the different areas of Energy Frontier, Intensity Frontier and Uncertainty Quantification.
Design and Simulation of IOTA - a Novel Concept of Integrable Optics Test Accelerator
Nagaitsev, S.; Valishev, A.; Danilov, V.V.; Shatilov, D.N.; /Novosibirsk, IYF
2012-05-01
The use of nonlinear lattices with large betatron tune spreads can increase instability and space charge thresholds due to improved Landau damping. Unfortunately, the majority of nonlinear accelerator lattices turn out to be nonintegrable, producing chaotic motion and a complex network of stable and unstable resonances. Recent advances in finding the integrable nonlinear accelerator lattices have led to a proposal to construct at Fermilab a test accelerator with strong nonlinear focusing which avoids resonances and chaotic particle motion. This presentation will outline the main challenges, theoretical design solutions and construction status of the Integrable Optics Test Accelerator (IOTA) underway at Fermilab.
Magnetogasdynamic compression of a coaxial plasma accelerator flow for micrometeoroid simulation
NASA Technical Reports Server (NTRS)
Igenbergs, E. B.; Shriver, E. L.
1974-01-01
A new configuration of a coaxial plasma accelerator with self-energized magnetic compressor coil attached is described. It is shown that the circuit may be treated theoretically by analyzing an equivalent circuit mesh. The results obtained from the theoretical analysis compare favorably with the results measured experimentally. Using this accelerator configuration, glass beads of 125 micron diameter were accelerated to velocities as high as 11 kilometers per second, while 700 micron diameter glass beads were accelerated to velocities as high as 5 kilometers per second. The velocities are within the hypervelocity regime of meteoroids.
GEANT4 simulations for beam emittance in a linear collider based on plasma wakefield acceleration
Mete, O.; Xia, G.; Hanahoe, K.; Labiche, M.
2015-08-15
Alternative acceleration technologies are currently under development for cost-effective, robust, compact, and efficient solutions. One such technology is plasma wakefield acceleration, driven by either a charged particle beam or a laser beam. However, potential issues of this approach must be studied in detail. In this paper, the emittance evolution of a witness beam subject to elastic scattering from gaseous media and to transverse focusing wakefields is studied.
Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs
NASA Astrophysics Data System (ADS)
Niemeyer, Kyle E.; Sung, Chih-Jen
2014-01-01
The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method; the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented by mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. With the hydrogen/carbon-monoxide mechanism, the GPU-based RKC implementation ran nearly 59 and 10 times faster than the single- and six-core CPU-based RKC algorithms, respectively, for problem sizes of 262,144 ODEs and larger. With the methane mechanism, RKC-GPU ran more than 65 and 11 times faster than the single- and six-core RKC-CPU versions for problem sizes of 131,072 ODEs and larger, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU ran more than 17 times faster than six-core RKC-CPU for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. The need for new strategies for integrating stiff chemistry on GPUs is therefore discussed.
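The GPU speedups above rest on the fact that operator splitting makes each cell's kinetics ODE system independent, so one GPU thread can integrate one cell. A CPU sketch of that batched structure, using a plain classical RK4 step on a toy decay system rather than the paper's Cash-Karp or RKC schemes (all rates and sizes below are made up for illustration):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step applied to a whole batch of independent ODE
    systems at once; y has shape (n_cells, n_species). Evaluating f on the
    full batch mirrors the one-thread-per-cell mapping used on the GPU."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy "kinetics": independent first-order decay dy/dt = -k*y in every cell,
# with a different (hypothetical) rate constant per cell standing in for the
# per-cell thermochemical state of a reactive-flow simulation.
k = np.linspace(0.5, 2.0, 1024).reshape(-1, 1)
f = lambda t, y: -k * y
y = np.ones((1024, 1))
h, n_steps = 0.01, 100
for i in range(n_steps):
    y = rk4_step(f, i * h, y, h)   # all 1024 cells advance together
```

The same structure, with an RKC step whose stage count is chosen from a stiffness estimate, is what makes the paper's moderately stiff cases tractable with explicit methods.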
Kalyaanamoorthy, Subha; Chen, Yi-Ping Phoebe
2012-02-27
Molecular channel exploration remains a prominent approach for eliciting the structure and accessibility of the active site and other internal spaces of macromolecules. The volume and silhouette characterization of these channels provides answers to the issues of substrate access and ligand swapping between the obscured active site and the exterior of the protein. Histone deacetylases (HDACs) are metal-dependent enzymes involved in cell growth, cell cycle regulation and progression, and their deregulation has been linked with different types of cancer. Hence HDACs, especially the class I family, are widely recognized as important cancer targets, and the characterization of their structures and functions has been of special interest in cancer drug discovery. The class I HDACs are known to possess two different protein channels, an 11 Å and a 14 Å channel (named channels A and B1, respectively), of which the former is a ligand- or substrate-occupying tunnel that leads to the buried active-site zinc ion and the latter is speculated to be involved in product release. In this work, we have carried out random acceleration molecular dynamics (RAMD) simulations coupled with classical molecular dynamics to explore the release of the ligand N-(2-aminophenyl)benzamide (LLX) from the active sites of the recently solved X-ray crystal structure of HDAC2 and the computationally modeled HDAC1 protein. The RAMD simulations identified significant structural and dynamic features of the HDAC channels, especially the key 'gate-keeping' amino acid residues that control these channels and the ligand release events. Further, this study identified a novel and unique channel B2, a subchannel of channel B1, in the HDAC1 protein structure. The roles of water molecules in LLX release from the HDAC1 and HDAC2 enzymes are also discussed. Such structural and dynamic properties of the HDAC protein channels that govern the ligand escape reactions will provide
Matsuda, K.; Terada, N.; Katoh, Y.; Misawa, H.
2011-08-15
There has been a great concern about the origin of the parallel electric field in the frame of fluid equations in the auroral acceleration region. This paper proposes a new method to simulate magnetohydrodynamic (MHD) equations that include the electron convection term and shows its efficiency with simulation results in one dimension. We apply a third-order semi-discrete central scheme to investigate the characteristics of the electron convection term including its nonlinearity. At a steady state discontinuity, the sum of the ion and electron convection terms balances with the ion pressure gradient. We find that the electron convection term works like the gradient of the negative pressure and reduces the ion sound speed or amplifies the sound mode when parallel current flows. The electron convection term enables us to describe a situation in which a parallel electric field and parallel electron acceleration coexist, which is impossible for ideal or resistive MHD.
NASA Astrophysics Data System (ADS)
Jie, Liang; Li, KenLi; Shi, Lin; Liu, RangSu; Mei, Jing
2014-01-01
Molecular dynamics (MD) simulation is a powerful tool for simulating and analyzing complex physical processes and phenomena at the atomic level, predicting the natural time evolution of a system of atoms. Precise simulation of physical processes places strong demands on both simulation size and computing timescale, so finding adequate computing resources to accelerate the computation is crucial. General-purpose graphics processing units (GPGPUs) have recently been adopted for general-purpose computing owing to their high floating-point performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method that accelerates neighbor-list updates and interaction-force calculations on modern graphics processing units (GPUs), enlarging the simulation to a system of 10 000 000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions with different precision-enabled CUDA versions, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than 16-core CPU cluster implementations. Comparisons between the theoretical and experimental results show good agreement, and more complete and larger cluster structures, as found in actual macroscopic materials, are observed. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed during metal solidification are observed with the large-sized system.
Bacterial cells enhance laser driven ion acceleration
Dalui, Malay; Kundu, M.; Trivikram, T. Madhu; Rajeev, R.; Ray, Krishanu; Krishnamurthy, M.
2014-01-01
Intense laser-produced plasmas generate hot electrons, which in turn lead to ion acceleration. The ability to generate faster ions or hotter electrons with the same laser parameters is one of the main outstanding challenges in intense laser-plasma physics. Here, we present a simple, albeit unconventional, target that succeeds in generating 700 keV carbon ions where conventional targets with the same laser parameters generate at most 40 keV. A coating of a few layers of micron-sized bacteria on a polished surface increases the laser energy coupling and generates a hotter plasma, which is more effective for ion acceleration than conventional polished targets. Particle-in-cell simulations show that micro-particle-coated targets are much more effective for ion acceleration, as seen in the experiment. We envisage that the accelerated, high-energy carbon ions can be used as a source for multiple applications. PMID:25102948
Hanson, David E
2011-08-07
Based on recent molecular dynamics and ab initio simulations of small isoprene molecules, we propose a new ansatz for rubber elasticity. We envision a network chain as a series of independent molecular kinks, each comprised of a small number of backbone units, and the strain as being imposed along the contour of the chain. We treat chain extension in three distinct force regimes: (Ia) near zero strain, where we assume that the chain is extended within a well defined tube, with all of the kinks participating simultaneously as entropic elastic springs, (II) when the chain becomes sensibly straight, giving rise to a purely enthalpic stretching force (until bond rupture occurs) and, (Ib) a linear entropic regime, between regimes Ia and II, in which a force limit is imposed by tube deformation. In this intermediate regime, the molecular kinks are assumed to be gradually straightened until the chain becomes a series of straight segments between entanglements. We assume that there exists a tube deformation tension limit that is inversely proportional to the chain path tortuosity. Here we report the results of numerical simulations of explicit three-dimensional, periodic, polyisoprene networks, using these extension-only force models. At low strain, crosslink nodes are moved affinely, up to an arbitrary node force limit. Above this limit, non-affine motion of the nodes is allowed to relax unbalanced chain forces. Our simulation results are in good agreement with tensile stress vs. strain experiments.
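The three force regimes described above can be caricatured as a single piecewise force law: an entropic spring that saturates at a tube-deformation limit, plus an enthalpic term that switches on once the chain is straight. Every parameter below is hypothetical, chosen only to show the shape of such a law, not taken from the paper:

```python
import numpy as np

def chain_tension(ext, n_kinks=20, k_ent=0.1, f_tube=1.0, k_enth=50.0):
    """Toy piecewise force law in the spirit of the three regimes:
      Ia: entropic spring response of the kinks at small extension,
      Ib: force capped at the tube-deformation limit f_tube,
      II: stiff enthalpic bond stretching once the chain is straight (ext >= 1).
    ext is the extension normalized by the chain contour length."""
    f_entropic = np.minimum(k_ent * n_kinks * ext, f_tube)   # regimes Ia -> Ib
    f_enthalpic = k_enth * np.maximum(ext - 1.0, 0.0)        # regime II
    return f_entropic + f_enthalpic
```

In a network simulation like the one described, a law of this shape would be evaluated per chain, with node positions relaxed until chain forces balance.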
Navier-Stokes simulation of the supersonic combustion flowfield in a ram accelerator
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1991-01-01
A computational study of the ram accelerator, a ramjet-in-tube device for accelerating projectiles to ultrahigh velocities, is presented. The analysis is performed using a fully implicit TVD scheme that efficiently solves the Reynolds-averaged Navier-Stokes equations and the species continuity equations associated with a finite rate combustion model. Previous analyses of this concept were based on inviscid assumptions. The present results indicate that viscous effects are of primary importance; in all the cases studied, shock-induced combustion always started in the boundary layer. The effects of Mach number, mixture composition, pressure, and turbulence are investigated for various configurations. Two types of combustion processes, one stable and the other unstable, were observed depending on the inflow conditions. In the unstable case, a detonation wave is formed, which propagates upstream and unstarts the ram accelerator. In the stable case, a solution that converges to steady-state is obtained, in which the combustion wave remains stationary with respect to the ram accelerator projectile. The possibility of stabilizing the detonation wave by means of a backward facing step is also investigated. In addition to these studies, two numerical techniques were tested. These two techniques are vector extrapolation to accelerate convergence, and a diagonal formulation that eliminates the expense of inverting large block matrices that arise in chemically reacting flows.
Han, Tao; Das, Diganta Bhusan
2015-06-01
Microneedle (MN) is a relatively recent invention and an efficient technology for transdermal drug delivery (TDD). Conventionally, mathematical models of MN drug delivery define the shape of the holes created by the MNs in the skin as the same as their actual geometry. Furthermore, the size of the MN holes in the skin is considered to be either the same as or a certain fraction of the length of the MNs. However, histological images of MN-treated skin indicate that the real insertion depth is much shorter than the length of the MNs and that the shapes may vary significantly from one case to another. In addressing these points, we propose a new approach for modeling MN-based drug delivery, which incorporates the histology of MN-pierced skin using a number of concepts borrowed from image processing tools. It is expected that the developed approach will provide better accuracy of the drug diffusion profile. A new computer program is developed to automatically obtain the outline of the MN-treated holes and import these images into computer software for simulation of drug diffusion from MN systems. This method provides a simple and fast way to test the quality of MN design and modeling, as well as to simulate experimental studies, for example, permeation experiments on MN-pierced skin using a diffusion cell. The developed methodology is demonstrated using two-dimensional (2D) numerical modeling of flat MNs. However, the methodology is general and can be implemented for three-dimensional (3D) MNs if there is a sufficient number of images for reconstructing a 3D image for numerical simulation. Numerical modeling in 3D is demonstrated using images of an ideal 3D MN. The methodology is not demonstrated for real 3D MNs, as there are not sufficient images for the purpose of this paper.
Sinitsyn, Oleksandr; Nusinovich, Gregory; Antonsen, Thomas Jr.
2010-11-04
In this paper, new results of numerical studies of multipactor in dielectric-loaded accelerator structures are presented. The results are compared with experimental data obtained during recent studies of such structures performed by Argonne National Laboratory, the Naval Research Laboratory, SLAC National Accelerator Laboratory and Euclid TechLabs, LLC. Good agreement between theory and experiment was observed for the structures with larger inner diameter; however, the structures with smaller inner diameter showed a discrepancy between the two. Possible reasons for this discrepancy are discussed.
Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.
2014-10-01
The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.
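The key mechanic of the time-dependent propensity approach is sampling the next reaction time from a propensity a(t) that varies between reaction events (because diffusion is accounted for analytically rather than event by event). That amounts to solving the integrated-hazard equation ∫ a(t) dt = -ln(u) for the firing time. A generic sketch with numerical integration and bisection (the quadrature resolution and horizon are illustrative choices, not the paper's):

```python
import numpy as np

def cumulative_hazard(a_of_t, t0, tau, n=4096):
    """Trapezoidal integral of the propensity a(t) over [t0, t0 + tau]."""
    ts = np.linspace(t0, t0 + tau, n + 1)
    vals = a_of_t(ts)
    return 0.5 * float(np.sum(vals[1:] + vals[:-1])) * (tau / n)

def invert_hazard(a_of_t, t0, target, t_max=1e3, tol=1e-10):
    """Find tau with integral_{t0}^{t0+tau} a(t) dt = target by bisection.
    Sampling target = -log(u), u ~ U(0,1), gives the next firing time of a
    reaction whose propensity a(t) depends on time."""
    if cumulative_hazard(a_of_t, t0, t_max) < target:
        return float('inf')            # no firing within the horizon
    lo, hi = 0.0, t_max
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if cumulative_hazard(a_of_t, t0, mid) < target:
            lo = mid
        else:
            hi = mid
    return hi
```

With a constant propensity this reduces to the ordinary SSA exponential waiting time; the efficiency gain in the paper comes from the step size being the gap between reaction events rather than between far more frequent diffusion events.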
Vay, J; Fawley, W M; Geddes, C G; Cormier-Michel, E; Grote, D P
2009-05-05
It has been shown that the ratio of longest to shortest space and time scales of a system of two or more components crossing at relativistic velocities is not invariant under Lorentz transformation. This implies the existence of a frame of reference minimizing an aggregate measure of the ratio of space and time scales. It was demonstrated that this translates into a reduction by orders of magnitude in computer simulation run times, using methods based on first principles (e.g., Particle-In-Cell), for particle acceleration devices and for problems such as free-electron lasers, laser-plasma accelerators, and particle beams interacting with electron clouds. Since then, speed-ups ranging from 75 to more than four orders of magnitude have been reported for the simulation of either scaled or reduced models of the above-cited problems. It was further shown that, to achieve the full benefits of calculating in a boosted frame, some of the standard numerical techniques needed to be revised. The theory behind the speed-up of numerical simulation in a boosted frame, the latest developments of numerical methods, and example applications with the new opportunities they offer are all presented.
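The scale-ratio argument can be made concrete for a laser-plasma accelerator stage: in a frame boosted along the laser direction at Lorentz factor γ, the laser wavelength is Doppler-dilated by (1+β)γ while the plasma column is contracted by 1/γ, so the span of length scales shrinks by (1+β)γ² ≈ 2γ². The stage length, wavelength, and γ below are illustrative numbers, not values from the paper:

```python
import math

def scale_ratio_reduction(gamma):
    """Factor by which the span of length scales shrinks in the boosted frame:
    the laser wavelength dilates by (1 + beta) * gamma while the plasma column
    contracts by 1 / gamma, giving a combined factor (1 + beta) * gamma**2."""
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return (1.0 + beta) * gamma ** 2

# Illustrative numbers: a 1 cm plasma stage and a 0.8 um laser wavelength,
# viewed from a frame boosted at gamma = 10.
L_lab, lam_lab = 1e-2, 0.8e-6
gamma = 10.0
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
lam_boost = lam_lab * (1.0 + beta) * gamma   # Doppler-dilated wavelength
L_boost = L_lab / gamma                      # Lorentz-contracted stage length
ratio_lab = L_lab / lam_lab
ratio_boost = L_boost / lam_boost
```

Since grid resolution must resolve the shortest scale while the domain must span the longest, a smaller scale ratio translates directly into fewer cells and fewer time steps.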
NASA Astrophysics Data System (ADS)
Nejad, Marjan A.; Mücksch, Christian; Urbassek, Herbert M.
2017-02-01
Adsorption of insulin on polar and nonpolar surfaces of crystalline SiO2 (cristobalite and α-quartz) is studied using molecular dynamics simulation. Acceleration techniques are used in order to sample adsorption phase space efficiently and to identify realistic adsorption conformations. We find major differences between the polar and nonpolar surfaces. Electrostatic interactions govern the adsorption on polar surfaces and can be described by the alignment of the protein dipole with the surface dipole; hence spreading of the protein on the surface is irrelevant. On nonpolar surfaces, on the other hand, van der Waals interaction dominates, inducing surface spreading of the protein.
Magneto-hydrodynamics simulation study of deflagration mode in co-axial plasma accelerators
NASA Astrophysics Data System (ADS)
Sitaraman, Hariswaran; Raja, Laxminarayan L.
2014-01-01
Experimental studies by Poehlmann et al. [Phys. Plasmas 17(12), 123508 (2010)] on a coaxial electrode magnetohydrodynamic (MHD) plasma accelerator have revealed two modes of operation. A deflagration or stationary mode is observed for lower power settings, while higher input power leads to a detonation or snowplow mode. A numerical modeling study of a coaxial plasma accelerator using the non-ideal MHD equations is presented. The effect of plasma conductivity on the axial distribution of radial current is studied and found to agree well with experiments. Lower conductivities lead to the formation of a high current density, stationary region close to the inlet/breech, which is a characteristic of the deflagration mode, while a propagating current sheet like feature is observed at higher conductivities, similar to the detonation mode. Results confirm that plasma resistivity, which determines magnetic field diffusion effects, is fundamentally responsible for the two modes.
Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods
Luskin, Mitchell; James, Richard; Tadmor, Ellad
2014-03-30
This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.
Simulation of the laser acceleration experiment at the Fermilab/NICADD photoinjector laboratory
Piot, P.; Tikhoplav, R.; Melissinos, A.C.; /Rochester U.
2005-05-01
The possibility of using a laser beam to accelerate electrons in a waveguide structure with dimensions much larger than the laser wavelength was proposed by Pantel and analytically investigated by Xie. In the present paper we report the status of our experimental plan to demonstrate the laser/e⁻ interaction using an e⁻ beam with an initial energy of 40-50 MeV.
Anisotropic hydrogen diffusion in α-Zr and Zircaloy predicted by accelerated kinetic Monte Carlo simulations
Zhang, Yongfeng; Jiang, Chao; Bai, Xianming
2017-01-01
This report presents an accelerated kinetic Monte Carlo (KMC) method to compute the diffusivity of hydrogen in hcp metals and alloys, considering both thermally activated hopping and quantum tunneling. The acceleration is achieved by replacing regular KMC jumps in trapping energy basins formed by neighboring tetrahedral interstitial sites, with analytical solutions for basin exiting time and probability. Parameterized by density functional theory (DFT) calculations, the accelerated KMC method is shown to be capable of efficiently calculating hydrogen diffusivity in α-Zr and Zircaloy, without altering the kinetics of long-range diffusion. Above room temperature, hydrogen diffusion in α-Zr and Zircaloy is dominated by thermal hopping, with negligible contribution from quantum tunneling. The diffusivity predicted by this DFT + KMC approach agrees well with that from previous independent experiments and theories, without using any data fitting. The diffusivity along
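The basin-acceleration step can be illustrated with standard absorbing-Markov-chain algebra: rather than simulating the many fast intra-basin hops one by one, solve small linear systems for the mean exit time and the exit-channel probabilities. The two-state rates in the usage check are made up for illustration; in the paper the basins are formed by neighboring tetrahedral interstitial sites.

```python
import numpy as np

def basin_exit(internal, escape):
    """Replace fast in-basin hopping with the analytic absorbing-chain solution.
    internal[i, j]: hop rate from basin state i to basin state j (i != j);
    escape[i, k]:   rate from basin state i out through exit channel k.
    With generator Q on the transient (basin) states, mean exit times solve
    -Q tau = 1 and the exit-channel (splitting) probabilities solve -Q B = escape.
    Returns (mean exit time from each state, exit-channel probability matrix)."""
    n = internal.shape[0]
    Q = internal - np.diag(internal.sum(axis=1) + escape.sum(axis=1))
    tau = np.linalg.solve(-Q, np.ones(n))   # mean first-passage (exit) times
    B = np.linalg.solve(-Q, escape)         # rows sum to 1
    return tau, B
```

A KMC driver would draw the exit channel from a row of B and advance the clock by a time drawn consistently with tau, skipping every intermediate hop.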
Simulation of Tunable Infra-Red Free-Electron Laser Based on Test Linac of Pohang Accelerator Laboratory
NASA Astrophysics Data System (ADS)
Kim, Hyoung Suk; Hahn, Sang June; Lee, Jae Koo
1993-07-01
We have investigated the possibility of a tunable infrared free-electron laser based on the Test Linac of the Pohang Accelerator Laboratory, using a one-dimensional simulation that includes energy spread and space-charge effects, and a 3-D particle simulation that neglects the space-charge force but accounts for the energy spread and emittance of the electron beam and the diffraction of the electromagnetic wave. The enhanced current density of the Test Linac makes it feasible to amplify a 4.2 kW signal of 10.6 μm radiation to the 200 MW level. Extending the design parameters, namely the electron beam energy (20˜60 MeV), wiggler field strength (around 3 kG), and radiation wavelength (10˜90 μm), we have determined the requisites for the design of a tunable radiation source and the expected gains in that frequency range. The device is shown to generate more than 100 MW over the tunable range.
Cerutti, B.; Werner, G. R.; Uzdensky, D. A.; Begelman, M. C.
2013-06-20
It is generally accepted that astrophysical sources cannot emit synchrotron radiation above 160 MeV in their rest frame. This limit is given by the balance between the accelerating electric force and the radiation reaction force acting on the electrons. The discovery of synchrotron gamma-ray flares in the Crab Nebula, well above this limit, challenges this classical picture of particle acceleration. To overcome this limit, particles must accelerate in a region of high electric field and low magnetic field. This is possible only with a non-ideal magnetohydrodynamic process, like magnetic reconnection. We present the first numerical evidence of particle acceleration beyond the synchrotron burnoff limit, using a set of two-dimensional particle-in-cell simulations of ultra-relativistic pair plasma reconnection. We use a new code, Zeltron, that includes self-consistently the radiation reaction force in the equation of motion of the particles. We demonstrate that the most energetic particles move back and forth across the reconnection layer, following relativistic Speiser orbits. These particles then radiate >160 MeV synchrotron radiation rapidly, within a fraction of a full gyration, after they exit the layer. Our analysis shows that the high-energy synchrotron flux is highly variable in time because of the strong anisotropy and inhomogeneity of the energetic particles. We discover a robust positive correlation between the flux and the cut-off energy of the emitted radiation, mimicking the effect of relativistic Doppler amplification. A strong guide field quenches the emission of >160 MeV synchrotron radiation. Our results are consistent with the observed properties of the Crab flares, supporting the reconnection scenario.
Monte Carlo simulation of electron beams from an accelerator head using PENELOPE
NASA Astrophysics Data System (ADS)
Sempau, J.; Sánchez-Reyes, A.; Salvat, F.; Oulad ben Tahar, H.; Jiang, S. B.; Fernández-Varea, J. M.
2001-04-01
The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the `latent' variance in the phase-space file, are discussed in detail.
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
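For context, the frequency-independent SOR baseline that the convolution variant improves on looks like the following for a linear system Ax = b; the convolution algorithm replaces the scalar overrelaxation parameter by a convolution kernel acting on waveforms. This is a generic textbook sketch, not code from the paper.

```python
import numpy as np

def sor(a, b, omega=1.5, tol=1e-10, max_iter=10000):
    """Classical successive overrelaxation for a linear system a @ x = b.
    Each sweep updates x[i] using the latest values of x[:i] (Gauss-Seidel
    style) and then overrelaxes by the factor omega (0 < omega < 2)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # split the row into already-updated and not-yet-updated parts
            sigma = a[i, :i] @ x[:i] + a[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / a[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x
```

Waveform relaxation applies the same splitting to entire time-dependent waveforms rather than scalars, which is why a frequency-dependent (convolution) parameter can outperform the single scalar omega used here.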
Numerical Simulation of Laser-driven In-Tube Accelerator on Supersonic Condition
NASA Astrophysics Data System (ADS)
Kim, Sukyum; Jeung, In-Seuck; Choi, Jeong-Yeol
2004-03-01
Recently, several laser-propulsion vehicles have been launched successfully, but these flights remained at very low subsonic speeds. The Laser-driven In-Tube Accelerator (LITA), developed at Tohoku University, is a unique laser-propulsion system. In this paper, flow characteristics and momentum coupling coefficients are studied numerically under supersonic conditions with the same configuration as LITA. Because of aerodynamic drag, the coupling coefficient could not be obtained correctly, especially at low energy input; in this study, it was therefore calculated using the concept of the effective impulse.
Simulated 2D vs. 3D Shock Waves: Implications for Particle Acceleration
Jones, Frank C.
2005-08-01
We have given a rigorous derivation of a theorem showing that charged particles in an arbitrary electromagnetic field with at least one ignorable spatial coordinate remain forever tied to a given magnetic-field line. This contrasts with the significant motion normal to the magnetic field that is expected in most real three-dimensional systems. While the significance of the theorem was not widely appreciated until recently, it has important consequences for a number of problems and is of particular relevance for the acceleration of cosmic rays by shocks.
Test simulation of neutron damage to electronic components using accelerator facilities
NASA Astrophysics Data System (ADS)
King, D. B.; Fleming, R. M.; Bielejec, E. S.; McDonald, J. K.; Vizkelethy, G.
2015-12-01
The purpose of this work is to demonstrate equivalent bipolar transistor damage response to neutrons and silicon ions. We report on irradiation tests performed at the White Sands Missile Range Fast Burst Reactor, the Sandia National Laboratories (SNL) Annular Core Research Reactor, the SNL SPHINX accelerator, and the SNL Ion Beam Laboratory using commercial silicon npn bipolar junction transistors (BJTs) and III-V Npn heterojunction bipolar transistors (HBTs). Late time and early time gain metrics as well as defect spectra measurements are reported.
Accelerating atomic-level protein simulations by flat-histogram techniques
NASA Astrophysics Data System (ADS)
Jónsson, Sigurður Æ.; Mohanty, Sandipan; Irbäck, Anders
2011-09-01
Flat-histogram techniques provide a powerful approach to the simulation of first-order-like phase transitions and are potentially very useful for protein studies. Here, we test this approach by implicit solvent all-atom Monte Carlo (MC) simulations of peptide aggregation, for a 7-residue fragment (GIIFNEQ) of the Cu/Zn superoxide dismutase 1 protein (SOD1). In simulations with 8 chains, we observe two distinct aggregated/non-aggregated phases. At the midpoint temperature, these phases coexist, separated by a free-energy barrier of height 2.7 kBT. We show that this system can be successfully studied by carefully implemented flat-histogram techniques. The frequency of barrier crossing, which is low in conventional canonical simulations, can be increased by turning to a two-step procedure based on the Wang-Landau and multicanonical algorithms.
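The Wang-Landau flat-histogram idea used here can be demonstrated on a deliberately trivial toy system rather than the all-atom peptide model: take N independent spins whose "energy" is the number of up spins, so the exact density of states is the binomial coefficient C(N, E). The parameters (chunk size, flatness criterion, final modification factor) are illustrative choices, not the paper's settings.

```python
import math
import random

def wang_landau(n_spins=8, ln_f_final=1e-3, flatness=0.8, seed=1):
    """Wang-Landau estimate of ln g(E) for a toy system of n_spins
    non-interacting spins with E = number of up spins. Moves that lower
    the current estimate of g are always accepted, which drives the
    energy histogram flat and lets the walker cross 'barriers' that trap
    conventional canonical sampling."""
    rng = random.Random(seed)
    spins = [rng.randint(0, 1) for _ in range(n_spins)]
    e = sum(spins)
    ln_g = [0.0] * (n_spins + 1)
    hist = [0] * (n_spins + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.randrange(n_spins)
            e_new = e + (1 - 2 * spins[i])      # flip spin i: +1 or -1
            # accept with probability min(1, g(E)/g(E_new))
            if math.log(rng.random() + 1e-300) < ln_g[e] - ln_g[e_new]:
                spins[i] ^= 1
                e = e_new
            ln_g[e] += ln_f
            hist[e] += 1
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (n_spins + 1)          # histogram flat: refine f
            ln_f /= 2.0
    return [lg - ln_g[0] for lg in ln_g]        # normalize so ln g(0) = 0
```

For n_spins = 8 the estimate should peak near E = 4 with ln g(4) ≈ ln C(8,4) = ln 70 ≈ 4.25, mirroring how the real simulations resolve the free-energy barrier between aggregated and non-aggregated phases.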
Accelerating the Customer-Driven Microgrid Through Real-Time Digital Simulation
I. Leonard; T. Baldwin; M. Sloderbeck
2009-07-01
Comprehensive design and testing of realistic customer-driven microgrids requires a high performance simulation platform capable of incorporating power system and control models with external hardware systems. Traditional non real-time simulation is unable to fully capture the level of detail necessary to expose real-world implementation issues. With a real-time digital simulator as its foundation, a high-fidelity simulation environment that includes a robust electrical power system model, advanced control architecture, and a highly adaptable communication network is introduced. Hardware-in-the-loop implementation approaches for the hardware-based control and communication systems are included. An overview of the existing power system model and its suitability for investigation of autonomous island formation within the microgrid is additionally presented. Further test plans are also documented.
Slaba, Tony C; Blattnig, Steve R; Norbury, John W; Rusek, Adam; La Tessa, Chiara
2016-02-01
The galactic cosmic ray (GCR) simulator at the NASA Space Radiation Laboratory (NSRL) is intended to deliver the broad spectrum of particles and energies encountered in deep space to biological targets in a controlled laboratory setting. In this work, certain aspects of simulating the GCR environment in the laboratory are discussed. Reference field specification and beam selection strategies at NSRL are the main focus, but the analysis presented herein may be modified for other facilities and possible biological considerations. First, comparisons are made between direct simulation of the external, free space GCR field and simulation of the induced tissue field behind shielding. It is found that upper energy constraints at NSRL limit the ability to simulate the external, free space field directly (i.e. shielding placed in the beam line in front of a biological target and exposed to a free space spectrum). Second, variation in the induced tissue field associated with shielding configuration and solar activity is addressed. It is found that the observed variation is likely within the uncertainty associated with representing any GCR reference field with discrete ion beams in the laboratory, given current facility constraints. A single reference field for deep space missions is subsequently identified. Third, a preliminary approach for selecting beams at NSRL to simulate the designated reference field is presented. This approach is not a final design for the GCR simulator, but rather a single step within a broader design strategy. It is shown that the beam selection methodology is tied directly to the reference environment, allows facility constraints to be incorporated, and may be adjusted to account for additional constraints imposed by biological or animal care considerations. The major biology questions are not addressed herein but are discussed in a companion paper published in the present issue of this journal. Drawbacks of the proposed methodology are discussed
Numerical simulation of ions acceleration and extraction in cyclotron DC-110
NASA Astrophysics Data System (ADS)
Samsonov, E. V.; Gikal, B. N.; Borisov, O. N.; Ivanenko, I. A.
2014-03-01
At the Flerov Laboratory of Nuclear Reactions of JINR, within the framework of the project "Beta", a cyclotron complex for a wide range of applied research in nanotechnology (track membranes, surface modification, etc.) is being created. The complex includes a dedicated heavy-ion cyclotron, DC-110, which yields intense beams of accelerated Ar, Kr and Xe ions at a fixed energy of 2.5 MeV/A. The cyclotron is equipped with an external injection system based on an ECR ion source, a spiral inflector, and an extraction system consisting of an electrostatic deflector and a passive magnetic channel. The results of beam-dynamics calculations in the measured magnetic field, from the exit of the spiral inflector to the correcting magnet located outside the accelerator vacuum chamber, are presented. It is shown that the design parameters of the ion beams at the entrance of the correcting magnet will be obtained using a false channel, a copy of the passive channel located on the opposite side of the magnetic system. The extraction efficiency will reach 75%.
Acceleration of plasma flows in the closed magnetic fields: Simulation and analysis
Mahajan, Swadesh M.; Shatashvili, Nana L.; Mikeladze, Solomon V.; Sigua, Ketevan I.
2006-06-15
Within the framework of a two-fluid description, possible pathways for the generation of fast flows (dynamical as well as steady) in closed magnetic fields are established. It is shown that a primary plasma flow (locally sub-Alfvénic) is accelerated while interacting with ambient arcade-like closed field structures. The time scale for creating reasonably fast flows (≳100 km/s) is dictated by the initial ion skin depth, while the amplification of the flow depends on the local plasma β. It is shown that the distances over which the flows become "fast" are ~0.01R₀ from the interaction surface (R₀ being a characteristic length of the system); later, the fast flow localizes (with dimensions ≲0.05R₀) in the upper central region of the original arcade. For fixed initial temperature, the final speed (≳500 km/s) of the accelerated flow and the modification of the field structure are independent of the time duration (lifetime) of the initial flow. In the presence of dissipation, these flows are likely to play a fundamental role in the heating of finely structured stellar atmospheres; their relevance to the solar wind is also obvious.
NASA Astrophysics Data System (ADS)
Etchebers, O.; Kedziorek, M. A.; Chossat, J.; Riou, C.; Bourg, A. C.
2003-12-01
A common way to dispose of sewage sludge is to spread it on agricultural land because of its high nutrient (P, N) and organic C contents. However, in addition to these beneficial components, sewage sludge can contain toxic chemicals such as heavy metals. This farming technique is relatively recent (several decades, at most) and there is still a need for information concerning the processes controlling the fate of the heavy metals in the sludge. To study how fast they migrate in the soil profile, the transfer of water and associated solutes in both saturated and unsaturated conditions can be accelerated by centrifugation according to the equation t_simulated = t_real × g² (where t is time and g is the acceleration in multiples of gravity). In a lysimeter study (diameter 30 cm, depth 60 cm) carried out using the CEA-CESTA Silat 265 centrifuge, we simulated, at 20 g, several months of percolation in one day. Experiments were done on cores of sandy forest soil (podzol) to which various sewage sludges (containing 2 to 12 mg/kg Cd, 20 to 120 mg/kg Ni, 50 to 465 mg/kg Pb) and simulated rain were applied. Major ions migrated at an estimated rate of 6-8.5 mm/simulated day (2-3 m/simulated year), while heavy metals (Cd, Ni, Pb) were retarded by a factor of 1.5 to 2. The retention of these heavy metals is associated with the organic C content of the soil profile (rich in the upper horizon).
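The quoted acceleration rule, t_simulated = t_real × g², is simple enough to check directly: the abstract's example of one day of spinning at 20 g works out to 20² = 400 days, i.e. roughly 13 months of field-scale percolation.

```python
def simulated_time(t_real, g_level):
    """Centrifuge time scaling from the abstract: time in the accelerated
    experiment maps to t_real * g_level**2 of real-world percolation."""
    return t_real * g_level ** 2

days = simulated_time(1, 20)   # one centrifuge day at 20 g
months = days / 30.4           # approximate months of field percolation
```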
GPU-accelerated Direct Sampling method for multiple-point statistical simulation
NASA Astrophysics Data System (ADS)
Huang, Tao; Li, Xue; Zhang, Ting; Lu, De-Tang
2013-08-01
Geostatistical simulation techniques have become a widely used tool for the modeling of oil and gas reservoirs and the assessment of uncertainty. The Direct Sampling (DS) algorithm is a recent multiple-point statistical simulation technique. It directly samples the training image (TI) during the simulation process by calculating distances between the TI patterns and the given data events found in the simulation grid (SG). Omitting the prior storage of all the TI patterns in a database, the DS algorithm can be used to simulate categorical, continuous and multivariate variables. Three fundamental input parameters are required for the definition of DS applications: the number of neighbors n, the acceptance threshold t and the fraction of the TI to scan f. For very large grids and complex spatial models with more severe parameter restrictions, the computational costs in terms of simulation time often become the bottleneck of practical applications. This paper focuses on an innovative implementation of the Direct Sampling method which exploits the benefits of graphics processing units (GPUs) to improve computational performance. Parallel schemes are applied to deal with two of the DS input parameters, n and f. Performance tests are carried out with large 3D grid size and the results are compared with those obtained based on the simulations with central processing units (CPU). The comparison indicates that the use of GPUs reduces the computation time by a factor of 10X-100X depending on the input parameters. Moreover, the concept of the search ellipsoid can be conveniently combined with the flexible data template of the DS method, and our experimental results of sand channels reconstruction show that it can improve the reproduction of the long-range connectivity patterns.
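The core DS acceptance loop, with its three parameters n (neighbors, here the length of the data event), t (acceptance threshold) and f (scan fraction), can be illustrated on a 1D categorical training image. The real algorithm works on 2D/3D grids with flexible templates and is what the paper parallelizes on the GPU; this serial sketch only shows the sampling logic.

```python
import random

def ds_draw(ti, data_event, t=0.1, f=0.5, rng=None):
    """Minimal Direct Sampling kernel on a 1D categorical training image.

    data_event: list of (lag, value) conditioning pairs relative to the
    node being simulated (its length plays the role of the parameter n).
    Scans up to a fraction f of the TI at random locations and returns
    the TI value at the first location whose mismatch fraction with the
    data event is <= t; if no location qualifies, falls back to the best
    candidate seen during the scan."""
    rng = rng or random.Random()
    n = len(ti)
    best_val, best_d = None, float("inf")
    locations = list(range(n))
    rng.shuffle(locations)
    for loc in locations[: int(f * n)]:
        pairs = [(loc + lag, v) for lag, v in data_event]
        if any(not 0 <= p < n for p, _ in pairs):
            continue                      # pattern falls off the TI edge
        d = sum(ti[p] != v for p, v in pairs) / len(pairs)
        if d < best_d:
            best_val, best_d = ti[loc], d
        if d <= t:                        # accept: sample the TI directly
            return ti[loc]
    return best_val
```

Because the scan stops at the first acceptable pattern, lowering t or raising f trades simulation speed for pattern fidelity, which is exactly the parameter tension the GPU implementation targets.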
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin
2016-04-07
Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.
On the use of reverse Brownian motion to accelerate hybrid simulations
NASA Astrophysics Data System (ADS)
Bakarji, Joseph; Tartakovsky, Daniel M.
2017-04-01
Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
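To give a feel for the particle half of such a hybrid, here is a plain forward Brownian estimator for a 1D steady diffusion (Laplace) problem with Dirichlet ends. The rBm of the paper instead launches walks from the boundary, which is what makes inhomogeneous Neumann and Robin conditions tractable; this sketch, with illustrative step size and walker count, only shows the embarrassingly parallel Monte Carlo structure both share.

```python
import random

def brownian_dirichlet_1d(x, length, u_left, u_right,
                          dx=0.02, n_walks=1000, seed=7):
    """Estimate the steady diffusion solution u(x) on [0, length] with
    Dirichlet boundary values u_left and u_right by releasing random
    walkers from x and averaging the boundary value each one first hits.
    The exact answer is the linear profile u_left + (u_right-u_left)*x/L."""
    rng = random.Random(seed)
    hits_right = 0
    for _ in range(n_walks):
        pos = x
        while 0.0 < pos < length:
            pos += dx if rng.random() < 0.5 else -dx
        if pos >= length:
            hits_right += 1
    p = hits_right / n_walks          # probability of exiting on the right
    return u_left * (1.0 - p) + u_right * p
```

Each walk is independent, so the loop parallelizes trivially, which is the property the hybrid scheme relies on when coupling the particle solver to the continuum one.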
NASA Astrophysics Data System (ADS)
Varma, Vidya; Prange, Matthias; Schulz, Michael
2016-11-01
Numerical simulations provide a considerable aid in studying past climates. Out of the various approaches taken in designing numerical climate experiments, transient simulations have been found to be the most optimal when it comes to comparison with proxy data. However, multi-millennial or longer simulations using fully coupled general circulation models are computationally very expensive such that acceleration techniques are frequently applied. In this study, we compare the results from transient simulations of the present and the last interglacial with and without acceleration of the orbital forcing, using the comprehensive coupled climate model CCSM3 (Community Climate System Model version 3). Our study shows that in low-latitude regions, the simulation of long-term variations in interglacial surface climate is not significantly affected by the use of the acceleration technique (with an acceleration factor of 10) and hence, large-scale model-data comparison of surface variables is not hampered. However, in high-latitude regions where the surface climate has a direct connection to the deep ocean, e.g. in the Southern Ocean or the Nordic Seas, acceleration-induced biases in sea-surface temperature evolution may occur with potential influence on the dynamics of the overlying atmosphere.
GPU-accelerated molecular dynamics simulation for study of liquid crystalline flows
NASA Astrophysics Data System (ADS)
Sunarso, Alfeus; Tsuji, Tomohiro; Chono, Shigeomi
2010-08-01
We have developed a GPU-based molecular dynamics simulation for the study of flows of fluids with anisotropic molecules such as liquid crystals. An application of the simulation to the study of macroscopic flow (backflow) generation by molecular reorientation in a nematic liquid crystal under the application of an electric field is presented. The computations of intermolecular force and torque are parallelized on the GPU using the cell-list method, and an efficient algorithm to update the cell lists was proposed. Some important issues in the implementation of computations that involve a large number of arithmetic operations and data on the GPU that has limited high-speed memory resources are addressed extensively. Despite the relatively low GPU occupancy in the calculation of intermolecular force and torque, the computation on a recent GPU is about 50 times faster than that on a single core of a recent CPU, thus simulations involving a large number of molecules using a personal computer are possible. The GPU-based simulation should allow an extensive investigation of the molecular-level mechanisms underlying various macroscopic flow phenomena in fluids with anisotropic molecules.
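The cell-list decomposition used to parallelize the force and torque sums is easy to state in serial form. A minimal sketch for a cubic periodic box, under generic assumptions (not the paper's GPU data layout):

```python
def build_cell_list(positions, box, rc):
    """Bin particles into cells of side >= rc (the interaction cutoff) so
    that each particle's interaction partners are guaranteed to lie in
    its own cell or one of the adjacent cells, reducing the pair search
    from O(N^2) to O(N). positions: list of (x, y, z) in a cubic box of
    side `box` with periodic boundaries."""
    n_cells = max(1, int(box / rc))    # cells per side; side = box/n_cells >= rc
    side = box / n_cells
    cells = {}
    for idx, (x, y, z) in enumerate(positions):
        key = (int(x / side) % n_cells,
               int(y / side) % n_cells,
               int(z / side) % n_cells)
        cells.setdefault(key, []).append(idx)
    return cells, n_cells
```

On a GPU, each cell (or each particle) maps naturally to a thread block, and the update algorithm mentioned in the abstract amounts to rebuilding or incrementally maintaining these bins as molecules move.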
Large-eddy and unsteady RANS simulations of a shock-accelerated heavy gas cylinder
Morgan, B. E.; Greenough, J. A.
2015-04-08
Two-dimensional numerical simulations of the Richtmyer–Meshkov unstable “shock-jet” problem are conducted using both large-eddy simulation (LES) and unsteady Reynolds-averaged Navier–Stokes (URANS) approaches in an arbitrary Lagrangian–Eulerian hydrodynamics code. Turbulence statistics are extracted from LES by running an ensemble of simulations with multimode perturbations to the initial conditions. Detailed grid convergence studies are conducted, and LES results are found to agree well with both experiment and high-order simulations conducted by Shankar et al. (Phys Fluids 23, 024102, 2011). URANS results using a k–L approach are found to be highly sensitive to initialization of the turbulence lengthscale L and to the time at which L becomes resolved on the computational mesh. As a result, it is observed that a gradient diffusion closure for turbulent species flux is a poor approximation at early times, and a new closure based on the mass-flux velocity is proposed for low-Reynolds-number mixing.
Kuwahara, Hiroyuki; Myers, Chris J
2008-09-01
Given the substantial computational requirements of stochastic simulation, approximation is essential for efficient analysis of any realistic biochemical system. This paper introduces a new approximation method to reduce the computational cost of stochastic simulations of an enzymatic reaction scheme which in biochemical systems often includes rapidly changing fast reactions with enzyme and enzyme-substrate complex molecules present in very small counts. Our new method removes the substrate dissociation reaction by approximating the passage time of the formation of each enzyme-substrate complex molecule which is destined to a production reaction. This approach skips the firings of unimportant yet expensive reaction events, resulting in a substantial acceleration in the stochastic simulations of enzymatic reactions. Additionally, since all the parameters used in our new approach can be derived by the Michaelis-Menten parameters which can actually be measured from experimental data, applications of this approximation can be practical even without having full knowledge of the underlying enzymatic reaction. Here, we apply this new method to various enzymatic reaction systems, resulting in a speedup of orders of magnitude in temporal behavior analysis without any significant loss in accuracy. Furthermore, we show that our new method can perform better than some of the best existing approximation methods for enzymatic reactions in terms of accuracy and efficiency.
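The kind of reduction described, replacing the full enzyme-substrate scheme with a single production channel parameterized by measurable Michaelis-Menten constants, can be sketched with a textbook Gillespie loop. The propensity form a(S) = Vmax·S/(Km + S) is the standard MM rate law, used here as a stand-in for the paper's passage-time construction; parameter values are illustrative.

```python
import math
import random

def ssa_mm(s0, vmax, km, t_end, seed=0):
    """Gillespie SSA for the reduced scheme S -> P with Michaelis-Menten
    propensity a(S) = vmax*S/(km+S). Each iteration fires one production
    event, skipping the many binding/dissociation events of the full
    enzymatic mechanism that make exact SSA expensive."""
    rng = random.Random(seed)
    t, s, p = 0.0, s0, 0
    while t < t_end and s > 0:
        a = vmax * s / (km + s)
        t += -math.log(rng.random()) / a   # exponential waiting time
        if t >= t_end:
            break
        s -= 1
        p += 1
    return s, p
```

Because every firing is a committed production event, substrate plus product is conserved, and the step count scales with the number of product molecules rather than with the (much larger) number of fast binding events.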
Modeling of 10 GeV-1 TeV laser-plasma accelerators using Lorentz boosted simulations
Vay, J. -L.; Geddes, C. G. R.; Esarey, E.; Schroeder, C. B.; Leemans, W. P.; Cormier-Michel, E.; Grote, D. P.
2011-12-13
We study modeling of laser-plasma wakefield accelerators in an optimal frame of reference [J.-L. Vay, Phys. Rev. Lett. 98, 130405 (2007)] that allows direct and efficient full-scale modeling of deeply depleted and beam-loaded laser-plasma stages of 10 GeV-1 TeV (parameters not computationally accessible otherwise). This verifies the scaling of plasma accelerators to very high energies and accurately models the laser evolution and the accelerated electron beam's transverse dynamics and energy spread. Speedups of over 4, 5, and 6 orders of magnitude are achieved for the modeling of 10 GeV, 100 GeV, and 1 TeV class stages, respectively. Agreement at the percent level is demonstrated between simulations using different frames of reference for a 0.1 GeV class stage. Obtaining these speedups and levels of accuracy was made possible by solutions for handling data input (in particular, particle and laser beam injection) and output in a relativistically boosted frame of reference, as well as mitigation of a high-frequency instability that otherwise limits effectiveness.
Rider, William; Kamm, J. R.; Tomkins, C. D.; Zoldi, C. A.; Prestridge, K. P.; Marr-Lyon, M.; Rightley, P. M.; Benjamin, R. F.
2002-01-01
We consider the detailed structures of mixing flows for Richtmyer-Meshkov experiments of Prestridge et al. [PRE 00] and Tomkins et al. [TOM 01] and examine the most recent measurements from the experimental apparatus. Numerical simulations of these experiments are performed with three different versions of high resolution finite volume Godunov methods. We compare experimental data with simulations for configurations of one and two diffuse cylinders of SF6 in air using integral measures as well as fractal analysis and continuous wavelet transforms. The details of the initial conditions have a significant effect on the computed results, especially in the case of the double cylinder. Additionally, these comparisons reveal sensitive dependence of the computed solution on the numerical method.
NASA Astrophysics Data System (ADS)
Bayati, Basil; Owhadi, Houman; Koumoutsakos, Petros
2010-12-01
We present a simple algorithm for the simulation of stiff, discrete-space, continuous-time Markov processes. The algorithm is based on the concept of flow averaging for the integration of stiff ordinary and stochastic differential equations, and ultimately leads to a straightforward variation of the well-known stochastic simulation algorithm (SSA). The speedup that can be achieved by the present algorithm [flow averaging integrator SSA (FLAVOR-SSA)] over the classical SSA comes naturally at the expense of its accuracy. The error of the proposed method exhibits a cutoff phenomenon as a function of its speedup, allowing for optimal tuning. Two numerical examples from chemical kinetics are provided to illustrate the efficiency of the method.
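For reference, the classical direct-method SSA that FLAVOR-SSA modifies can be sketched as follows (a generic Gillespie implementation, not the flow-averaging variant; names are illustrative):

```python
import random

def gillespie(x, stoich, rates, props, t_end, seed=0):
    """Classical direct-method SSA (Gillespie): sample an exponential
    waiting time from the total propensity, then pick one channel with
    probability proportional to its propensity. This is the baseline
    that FLAVOR-SSA accelerates by periodically deactivating fast
    channels; that modification is not shown here."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        a = [k * p(x) for k, p in zip(rates, props)]  # propensities
        a0 = sum(a)
        if a0 == 0.0:                 # no reaction can fire
            break
        t += rng.expovariate(a0)
        if t > t_end:
            break
        r = rng.random() * a0         # pick channel j with prob a_j/a0
        j = 0
        while r > a[j]:
            r -= a[j]
            j += 1
        x = [xi + s for xi, s in zip(x, stoich[j])]   # apply stoichiometry
    return x
```

For example, an irreversible conversion A -> B uses `stoich=[[-1, 1]]`, `rates=[k]`, and `props=[lambda x: x[0]]`.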
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
NASA Technical Reports Server (NTRS)
Krauss-Varban, D.; Burgess, D.; Wu, C. S.
1989-01-01
Under certain conditions electrons can be reflected and effectively energized at quasi-perpendicular shocks. This process is most prominent close to the point where the upstream magnetic field is tangent to the curved shock. A theoretical explanation of the underlying physical mechanism has been proposed which assumes conservation of the magnetic moment and a static, simplified shock profile. Test particle calculations of the electron reflection process are performed in order to examine the results of the theoretical analysis without imposing these restrictive conditions. A one-dimensional hybrid simulation code generates the characteristic field variations across the shock. Special emphasis is placed on the spatial and temporal length scales involved in the mirroring process. The simulation results agree generally well with the predictions from adiabatic theory. The effects of the cross-shock potential and unsteadiness are quantified, and the influence of field fluctuations on the reflection process is discussed.
Commissioning of a medical accelerator photon beam Monte Carlo simulation using wide-field profiles
NASA Astrophysics Data System (ADS)
Pena, J.; Franco, L.; Gómez, F.; Iglesias, A.; Lobato, R.; Mosquera, J.; Pazos, A.; Pardo, J.; Pombar, M.; Rodríguez, A.; Sendón, J.
2004-11-01
A method for commissioning an EGSnrc Monte Carlo simulation of medical linac photon beams through wide-field lateral profiles at moderate depth in a water phantom is presented. Although depth-dose profiles are commonly used for nominal energy determination, our study shows that they are quite insensitive to energy changes below 0.3 MeV (0.6 MeV) for a 6 MV (15 MV) photon beam. Also, the depth-dose profile dependence on beam radius adds an additional uncertainty in their use for tuning nominal energy. Simulated 40 cm × 40 cm lateral profiles at 5 cm depth in a water phantom show greater sensitivity to both nominal energy and radius. Beam parameters could be determined by comparing only these curves with measured data.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communication among computing nodes. Therefore, allocation is a key factor that affects the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost of each computing node so as to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms that optimize the allocation by considering spatial and communication constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
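The K-Means stage of a K&K-style allocation can be sketched as follows: cluster subdomain cell centers so that nearby cells land on the same computing node, which tends to keep communication local. This is a plain K-Means on coordinates; the Kernighan-Lin boundary refinement and the paper's computing/communication cost model are omitted, and all names are illustrative.

```python
import random

def kmeans_partition(points, k, iters=20, seed=0):
    """Cluster 2D cell centers into k groups by Lloyd's algorithm, a
    simplified stand-in for the K-Means stage of the K&K allocation."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign to nearest center
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                  (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):    # move centers to centroids
            if cl:
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return clusters
```

Each resulting cluster would then be assigned to one MPI rank; a refinement pass could subsequently swap boundary cells to balance load.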
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare the resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation on 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points, or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
Xu, Yanping; Randers-Pehrson, Gerhard; Turner, Helen C.; Marino, Stephen A.; Geard, Charles R.; Brenner, David J.; Garty, Guy
2015-01-01
We describe here an accelerator-based neutron irradiation facility, intended to expose blood or small animals to neutron fields mimicking those from an improvised nuclear device at relevant distances from the epicenter. Neutrons are generated by a mixed proton/deuteron beam on a thick beryllium target, generating a broad spectrum of neutron energies that match those estimated for the Hiroshima bomb at 1.5 km from ground zero. This spectrum, dominated by neutron energies between 0.2 and 9 MeV, is significantly different from the standard reactor fission spectrum, as the initial bomb spectrum changes when the neutrons are transported through air. The neutron and gamma dose rates were measured using a custom tissue-equivalent gas ionization chamber and a compensated Geiger-Mueller dosimeter, respectively. Neutron spectra were evaluated by unfolding measurements made using a proton-recoil proportional counter and a liquid scintillator detector. As an illustration of the potential use of this facility, we present micronucleus yields in singly divided, cytokinesis-blocked human peripheral lymphocytes up to 1.5 Gy, demonstrating a 3- to 5-fold enhancement over equivalent X-ray doses. This facility is currently in routine use, irradiating both mice and human blood samples for evaluation of neutron-specific biodosimetry assays. Future studies will focus on dose reconstruction in realistic mixed neutron/photon fields. PMID:26414507
NASA Astrophysics Data System (ADS)
Colberg, Peter H.; Höfling, Felix
2011-05-01
Modern graphics processing units (GPUs) provide impressive computing resources, which can be accessed conveniently through the CUDA programming interface. We describe how GPUs can be used to considerably speed up molecular dynamics (MD) simulations for system sizes ranging up to about 1 million particles. Particular emphasis is put on numerical long-time stability in terms of energy and momentum conservation, and caveats on limited floating-point precision are issued. Strict energy conservation over 10^8 MD steps is obtained by double-single emulation of the floating-point arithmetic in accuracy-critical parts of the algorithm. For the slow dynamics of a supercooled binary Lennard-Jones mixture, we demonstrate that the use of single floating-point precision may result in quantitatively and even physically wrong results. For simulations of a Lennard-Jones fluid, the described implementation shows speedup factors of up to 80 compared to a serial implementation for the CPU, and a single GPU was found to match a parallelised MD simulation using 64 distributed cores.
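The precision issue can be illustrated with compensated (Kahan) summation, a close relative of the double-single emulation used in such codes: a second, low-order accumulator recovers the bits that plain float32 arithmetic discards on each addition. This is an illustrative sketch in NumPy, not the paper's CUDA implementation.

```python
import numpy as np

def naive_sum(values):
    """Plain float32 accumulation: rounding error grows with the count."""
    s = np.float32(0.0)
    for v in values:
        s = np.float32(s + v)
    return s

def kahan_sum(values):
    """Compensated float32 summation: the correction term c plays the
    role of the low-order word in a double-single scheme."""
    s = np.float32(0.0)
    c = np.float32(0.0)
    for v in values:
        y = np.float32(v - c)        # apply the stored correction
        t = np.float32(s + y)
        c = np.float32((t - s) - y)  # bits lost by the addition
        s = t
    return s
```

Summing `np.float32(0.1)` a hundred thousand times, the compensated result stays within a few float32 ulps of 10000, while the naive accumulation drifts visibly.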
NASA Astrophysics Data System (ADS)
Puchalska, Monika; Sihver, Lembit
2015-06-01
Monte Carlo (MC) based calculation methods for modeling photon and particle transport have several potential applications in radiotherapy. An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. It is also essential to minimize the dose to radiosensitive and critical organs. With the MC technique, the dose distributions from both the primary and scattered photons can be calculated. The out-of-field radiation doses are of particular concern when high energy photons are used, since neutrons are then produced both in the accelerator head and inside the patient. Using the MC technique, the created photons and particles can be followed and the transport and energy deposition in all the tissues of the patient can be estimated. This is of great importance during pediatric treatments, when minimizing the risk to normal healthy tissue, e.g. of secondary cancer, is essential. The purpose of this work was to evaluate the efficiency of the 3D general-purpose PHITS MC code as an alternative approach for photon beam specification. In this study, we developed a model of an ELEKTA SL25 accelerator and used the transport code PHITS to calculate the total absorbed dose and the neutron energy spectra in-field and outside the treatment field. This model was validated against measurements performed with bubble detector spectrometers and Bonner spheres for 18 MV linacs, including both photons and neutrons. The average absolute difference between the calculated and measured absorbed dose in the out-of-field region was around 11%. Taking into account a simplification in the simulated geometry, which does not include any potential scattering materials in the surroundings, this result is very satisfactory. Good agreement between the simulated and measured neutron energy spectra was also observed when comparing with data found in the literature.
A GPU Accelerated Discontinuous Galerkin Conservative Level Set Method for Simulating Atomization
NASA Astrophysics Data System (ADS)
Jibben, Zechariah J.
This dissertation describes a process for interface capturing via an arbitrary-order, nearly quadrature free, discontinuous Galerkin (DG) scheme for the conservative level set method (Olsson et al., 2005, 2008). The DG numerical method is utilized to solve both advection and reinitialization, and executed on a refined level set grid (Herrmann, 2008) for effective use of processing power. Computation is executed in parallel utilizing both CPU and GPU architectures to make the method feasible at high order. Finally, a sparse data structure is implemented to take full advantage of parallelism on the GPU, where performance relies on well-managed memory operations. With solution variables projected into a kth order polynomial basis, a k + 1 order convergence rate is found for both advection and reinitialization tests using the method of manufactured solutions. Other standard test cases, such as Zalesak's disk and deformation of columns and spheres in periodic vortices are also performed, showing several orders of magnitude improvement over traditional WENO level set methods. These tests also show the impact of reinitialization, which often increases shape and volume errors as a result of level set scalar trapping by normal vectors calculated from the local level set field. Accelerating advection via GPU hardware is found to provide a 30x speedup comparing serial execution on a 2.0 GHz Intel Xeon E5-2620 CPU against an Nvidia Tesla K20 GPU, with speedup factors increasing with polynomial degree until shared memory is filled. A similar algorithm is implemented for reinitialization, which relies on heavier use of shared and global memory and as a result fills them more quickly, producing a smaller speedup of 18x.
A coupled ordinates method for solution acceleration of rarefied gas dynamics simulations
Das, Shankhadeep; Mathur, Sanjay R.; Alexeenko, Alina; Murthy, Jayathi Y.
2015-05-15
Non-equilibrium rarefied flows are frequently encountered in a wide range of applications, including atmospheric re-entry vehicles, vacuum technology, and microscale devices. Rarefied flows at the microscale can be effectively modeled using the ellipsoidal statistical Bhatnagar–Gross–Krook (ESBGK) form of the Boltzmann kinetic equation. Numerical solutions of these equations are often based on the finite volume method (FVM) in physical space and the discrete ordinates method in velocity space. However, existing solvers use a sequential solution procedure wherein the velocity distribution functions are implicitly coupled in physical space, but are solved sequentially in velocity space. This leads to explicit coupling of the distribution function values in velocity space and slows down convergence in systems with low Knudsen numbers. Furthermore, this also makes it difficult to solve multiscale problems or problems in which there is a large range of Knudsen numbers. In this paper, we extend the coupled ordinates method (COMET), previously developed to study participating radiative heat transfer, to solve the ESBGK equations. In this method, at each cell in the physical domain, distribution function values for all velocity ordinates are solved simultaneously. This coupled solution is used as a relaxation sweep in a geometric multigrid method in the spatial domain. Enhancements to COMET to account for the non-linearity of the ESBGK equations, as well as the coupled implementation of boundary conditions, are presented. The methodology works well with arbitrary convex polyhedral meshes, and is shown to give significantly faster solutions than the conventional sequential solution procedure. Acceleration factors of 5–9 are obtained for low to moderate Knudsen numbers on single processor platforms.
Zhmurov, A; Dima, R I; Kholodov, Y; Barsegov, V
2010-11-01
Theoretical exploration of fundamental biological processes involving the forced unraveling of multimeric proteins, the sliding motion in protein fibers and the mechanical deformation of biomolecular assemblies under physiological force loads is challenging even for distributed computing systems. Using a Cα-based coarse-grained self-organized polymer (SOP) model, we implemented Langevin simulations of proteins on graphics processing units (the SOP-GPU program). We assessed the computational performance of an end-to-end application of the program, where all the steps of the algorithm run on a GPU, by profiling the simulation time and memory usage for a number of test systems. The ~90-fold computational speedup on a GPU, compared with an optimized central processing unit program, enabled us to follow the dynamics on the centisecond timescale, and to obtain force-extension profiles using experimental pulling speeds (v_f = 1-10 μm/s) employed in atomic force microscopy and in optical-tweezers-based dynamic force spectroscopy. We found that the mechanical molecular response critically depends on the conditions of force application and that the kinetics and pathways for unfolding change drastically even upon a modest 10-fold increase in v_f. This implies that, to resolve accurately the free energy landscape and to relate the results of single-molecule experiments in vitro and in silico, molecular simulations should be carried out under the experimentally relevant force loads. This can be accomplished in reasonable wall-clock time for biomolecules of size as large as 10^5 residues using the SOP-GPU package.
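A toy version of such a pulling simulation, reduced to a single coarse-grained bead under overdamped Langevin dynamics with a harmonic spring whose anchor moves at the pulling speed v_f, might look as follows (illustrative only; the SOP-GPU model tracks whole chains with bonded and non-bonded interaction terms, and all parameter names here are assumptions):

```python
import math
import random

def langevin_pull(k_spring, v_f, gamma, kT, dt, n_steps, seed=0):
    """Overdamped (Brownian) dynamics of one bead pulled by a spring
    whose anchor moves at speed v_f:
        x += (F/gamma)*dt + sqrt(2*kT*dt/gamma)*N(0,1)
    Returns the final position and the spring-force trace, the analogue
    of a force-extension record in a pulling experiment."""
    rng = random.Random(seed)
    x = 0.0
    forces = []
    for step in range(n_steps):
        anchor = v_f * step * dt                 # moving spring anchor
        f = k_spring * (anchor - x)              # harmonic spring force
        noise = math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0.0, 1.0)
        x += (f / gamma) * dt + noise
        forces.append(f)
    return x, forces
```

Rerunning with a 10-fold larger `v_f` changes the recorded force trace, which is the single-bead analogue of the pulling-speed dependence discussed in the abstract.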
Wang, Haipeng; Plawski, Tomasz E.; Rimmer, Robert A.
2015-09-01
As a drop-in replacement for the CEBAF CW klystron system, a 1497 MHz, CW-type high-efficiency magnetron using injection phase lock and amplitude variation is attractive. Amplitude control using magnetic field trimming and anode voltage modulation has been studied using analytical models and MATLAB/Simulink simulations. Since the 1497 MHz magnetron has not been built yet, previously measured characteristics of a 2.45 GHz cooker magnetron are used as a reference. The results of the linear responses to amplitude and phase control of a superconducting RF (SRF) cavity, and the expected overall benefit for the current CEBAF and future MEIC RF systems, are presented in this paper.
Accelerated simulations of aromatic polymers: application to polyether ether ketone (PEEK)
NASA Astrophysics Data System (ADS)
Broadbent, Richard J.; Spencer, James S.; Mostofi, Arash A.; Sutton, Adrian P.
2014-10-01
For aromatic polymers, the out-of-plane oscillations of aromatic groups limit the maximum accessible time step in a molecular dynamics simulation. We present a systematic approach to removing such high-frequency oscillations from planar groups along aromatic polymer backbones, while preserving the dynamical properties of the system. We consider, as an example, the industrially important polymer, polyether ether ketone (PEEK), and show that this coarse graining technique maintains excellent agreement with the fully flexible all-atom and all-atom rigid bond models whilst allowing the time step to increase fivefold to 5 fs.
NASA Astrophysics Data System (ADS)
Gawad, J.; Khairullah, Md; Roose, D.; Van Bael, A.
2016-08-01
Multi-scale simulations are computationally expensive if a two-way coupling is employed. In the context of sheet metal forming simulations, a fine-scale representative volume element (RVE) crystal plasticity (CP) model would supply the Finite Element analysis with plastic properties, taking into account the evolution of crystallographic texture and other microstructural features. The main bottleneck is that the fine-scale model must be evaluated at virtually every integration point in the macroscopic FE mesh. We propose to address this issue by exploiting a verifiable assumption that fine-scale state variables of similar RVEs, as well as the derived properties, subjected to similar macroscopic boundary conditions evolve along nearly identical trajectories. Furthermore, the macroscopic field variables primarily responsible for the evolution of fine-scale state variables often feature local quasi-homogeneities. Adjacent integration points in the FE mesh can be then clustered together in the regions where the field responsible for the evolution shows low variance. This way the fine-scale evolution is tracked only at a limited number of material points and the derived plastic properties are propagated to the surrounding integration points subjected to similar deformation. Optimal configurations of the clusters vary in time as the local deformation conditions may change during the forming process, so the clusters must be periodically adapted. We consider two operations on the clusters of integration points: splitting (refinement) and merging (unrefinement). The concept is tested in the Hierarchical Multi-Scale (HMS) framework [1] that computes macroscopic deformations by means of the FEM, whereas the micro-structural evolution at the individual FE integration points is predicted by a CP model. The HMS locally and adaptively approximates homogenized stress responses of the CP model by means of analytical plastic potential or yield criterion function. Our earlier work
Dryga, Anatoly; Warshel, Arieh
2010-01-01
Simulations of long time processes in condensed phases in general, and in biomolecules in particular, present a major challenge that cannot be overcome at present by brute-force molecular dynamics (MD) approaches. This work takes the renormalization method, introduced by us some time ago, and establishes its reliability and potential in extending the time scale of molecular simulations. The validation involves a truncated gramicidin system in the gas phase that is small enough to allow very long explicit simulation and sufficiently complex to present the physics of realistic ion channels. The renormalization approach is found to be reliable and arguably presents the first approach that allows one to exploit the otherwise problematic steered molecular dynamics (SMD) treatments in quantitative and meaningful studies. It is established that we can reproduce the long time behavior of large systems by using Langevin dynamics (LD) simulations of a renormalized implicit model. This is done without spending the enormous time needed to obtain such trajectories in the explicit system. The present study also provides a promising advance in accelerated evaluation of free energy barriers. This is done by adjusting the effective potential in the implicit model to reproduce the same passage time as that obtained in the explicit model, under the influence of an external force. Here, having a reasonable effective friction provides a way to extract the potential of mean force (PMF) without investing the time needed for regular PMF calculations. The renormalization approach, which is illustrated here in realistic calculations, is expected to provide major help in studies of complex landscapes and in exploring long time dynamics of biomolecules. PMID:20836533
Accelerating Ab Initio Path Integral Simulations via Imaginary Multiple-Timestepping.
Cheng, Xiaolu; Herr, Jonathan D; Steele, Ryan P
2016-04-12
This work investigates the use of multiple-timestep schemes in imaginary time for computationally efficient ab initio equilibrium path integral simulations of quantum molecular motion. In the simplest formulation, only every nth path integral replica is computed at the target level of electronic structure theory, whereas the remaining low-level replicas still account for nuclear motion quantum effects with a more computationally economical theory. Motivated by recent developments for multiple-timestep techniques in real-time classical molecular dynamics, both 1-electron (atomic-orbital basis set) and 2-electron (electron correlation) truncations are shown to be effective. Structural distributions and thermodynamic averages are tested for representative analytic potentials and ab initio molecular examples. Target quantum chemistry methods include density functional theory and second-order Møller-Plesset perturbation theory, although any level of theory is formally amenable to this framework. For a standard two-level splitting, computational speedups of 1.6-4.0x are observed when using a 4-fold reduction in time slices; an 8-fold reduction is feasible in some cases. Multitiered options further reduce computational requirements and suggest that quantum mechanical motion could potentially be obtained at a cost not significantly different from the cost of classical simulations.
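The flavor of the two-level splitting can be caricatured by an estimator that averages the cheap theory over all replicas and corrects it with the high-minus-low difference sampled on every nth replica. This is a hedged sketch of the idea only: the actual method applies the splitting inside the path-integral sampling itself, and the function and argument names are assumptions.

```python
def mts_pimd_energy(e_low, e_high_fn, n):
    """Two-level estimate over a ring of P replicas: full average of the
    cheap per-replica energies e_low, plus the high-low correction
    evaluated only on every nth replica (scaled by n/P so that each
    sampled difference stands in for n replicas)."""
    P = len(e_low)
    corr = sum(e_high_fn(j) - e_low[j] for j in range(0, P, n))
    return sum(e_low) / P + corr * n / P
```

If the expensive theory agrees with the cheap one everywhere, the correction vanishes and the plain low-level average is recovered.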
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
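The parallel-chain, differential-evolution proposal that DREAM builds on can be sketched in a few lines. The following is a minimal DE-MC illustration on a toy Gaussian target, not the DREAM implementation itself (DREAM adds randomized subspace sampling, outlier handling, and adaptive crossover):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * np.sum(x ** 2)  # toy target: standard normal

def de_mc(n_chains=8, dim=2, n_iter=3000):
    """Minimal differential-evolution MCMC: each chain proposes a jump
    along the difference of two other randomly chosen chains, so the
    proposal scale/orientation adapts to the target automatically."""
    X = rng.normal(size=(n_chains, dim))
    gamma = 2.38 / np.sqrt(2 * dim)   # standard DE-MC jump factor
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                                2, replace=False)
            prop = X[i] + gamma * (X[r1] - X[r2]) + 1e-4 * rng.normal(size=dim)
            # Metropolis accept/reject
            if np.log(rng.random()) < log_target(prop) - log_target(X[i]):
                X[i] = prop
            samples.append(X[i].copy())
    return np.array(samples)

s = de_mc()
print(s.mean(axis=0), s.std(axis=0))
```

For the standard normal target the pooled samples should have mean near 0 and standard deviation near 1.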
Budiardja, R. D.; Cardall, Christian Y; Endeve, Eirik
2015-01-01
Core-collapse supernovae are among the most powerful explosions in the Universe, releasing about 10^53 erg of energy on timescales of a few tens of seconds. These explosion events are also responsible for the production and dissemination of most of the heavy elements, making life as we know it possible. Yet exactly how they work is still unresolved. One reason for this is the sheer complexity and cost of a self-consistent, multi-physics, and multi-dimensional core-collapse supernova simulation, which is impractical, and often impossible, even on the largest supercomputers we have available today. To advance our understanding we instead must often use simplified models, teasing out the most important ingredients for successful explosions, while helping us to interpret results from higher fidelity multi-physics models. In this paper we investigate the role of instabilities in the core-collapse supernova environment. We present here simulation and visualization results produced by our code GenASiS.
Simulating Rectified Motion of a Piston in a Housing Subjected to Vibrational Acceleration
NASA Astrophysics Data System (ADS)
Clausen, Jonathan; Torczynski, John; Romero, Louis; O'Hern, Timothy
2014-11-01
We employ ALE finite element simulations to investigate the behavior of a piston in a housing subjected to vertical vibrations. The housing is filled with a viscous liquid to damp the piston motion and has bellows at both ends to represent air bubbles present in real systems. The piston has a roughly cylindrical hole along its axis, and a post attached to the housing penetrates partway into this hole. Protrusions from the hole and the post form a gap with a length that varies as the piston moves and forces liquid through this gap. Under certain conditions, nonlinearities in the system can drive the piston to move downward and compress the spring that holds it up against gravity. This behavior is investigated using ALE finite element simulations, and these results are compared with theoretical predictions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Karniadakis, George Em
2014-11-01
We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computer with NVIDIA GPGPUs of compute capability 3.0. Operating system: Linux. Has the code been
A scalable messaging system for accelerating discovery from large scale scientific simulations
Jin, Tong; Zhang, Fan; Parashar, Manish; Klasky, Scott A; Podhorszki, Norbert; Abbasi, Hasan
2012-01-01
Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which have to be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges, and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., when a simple data value, a complex function, or a simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain is greater or less than a threshold value, or when certain spatial/temporal data features or data patterns are detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation of a single dust storm event may take several hours or even days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is a key factor affecting the feasibility of parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a
The Reception Learning Paradigm.
ERIC Educational Resources Information Center
Novak, Joseph D.
This report suggests that research in education, as well as the design of instruction, can be importantly influenced by the paradigm that guides the work. The application of a paradigm to educational research is illustrated, and two paradigms (reception learning and discovery learning) are contrasted. Finally, it is suggested that all educational…
NASA Astrophysics Data System (ADS)
Spellings, Matthew; Marson, Ryan L.; Anderson, Joshua A.; Glotzer, Sharon C.
2017-04-01
Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics, are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community, the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for the simulation of hard, faceted nanoparticles, with a conservative Weeks-Chandler-Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
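For reference, the conservative WCA pair potential mentioned above is the Lennard-Jones potential truncated at its minimum and shifted to zero there. A minimal NumPy sketch (generic, not tied to the authors' DEM implementation):

```python
import numpy as np

def wca(r, epsilon=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen pair potential: the purely repulsive
    Lennard-Jones core, truncated and shifted so U(r_cut) = 0 at
    r_cut = 2**(1/6) * sigma."""
    r = np.asarray(r, dtype=float)
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    u = 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon  # LJ shifted up by epsilon
    return np.where(r < r_cut, u, 0.0)
```

Because the shift cancels the LJ minimum exactly, the potential and its force go to zero continuously at the cutoff, which is what makes it suitable for smooth, conservative dynamics.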
NASA Technical Reports Server (NTRS)
Kessel, R. L.; Armstrong, T. P.; Nuber, R.; Bandle, J.
1985-01-01
Data were examined from two experiments aboard the Explorer 50 (IMP 8) spacecraft. The Johns Hopkins University/Applied Physics Laboratory Charged Particle Measurement Experiment (CPME) provides 10.12-second resolution ion and electron count rates as well as 5.5-minute or longer averages of the same, with data sampled in the ecliptic plane. The high time resolution of the data allows for an explicit, point-by-point merging of the magnetic field and particle data and thus a close examination of the pre- and post-shock conditions and particle fluxes associated with large-angle oblique shocks in the interplanetary field. A computer simulation has been developed wherein sample particle trajectories, taken from observed fluxes, are allowed to interact with a planar shock either forward or backward in time. One event, the 1974 Day 312 shock, is examined in detail.
Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.
Nagaoka, Tomoaki; Watanabe, Soichi
2011-01-01
Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapt three-dimensional FDTD code to a multi-GPU environment using Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 boards as GPGPUs. The performance of the multi-GPU configuration is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and was slightly (approx. 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs improves significantly as the number of GPUs increases.
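The FDTD method being accelerated here is a simple stencil update, which is why it maps well to GPUs. A minimal 1-D illustration in normalized units (a toy sketch; the paper's code is 3-D and CUDA-based):

```python
import numpy as np

def fdtd_1d(steps=200, n=400):
    """Minimal 1-D FDTD (Yee) loop in normalized units with a Courant
    number of 0.5: leapfrogged E and H field updates plus a soft
    Gaussian source injected at the grid center."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])   # H update (staggered grid)
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])    # E update
        ez[n // 2] += np.exp(-((t - 30) ** 2) / 100.0)  # soft source
    return ez

field = fdtd_1d()
print(np.max(np.abs(field)))
```

Each grid point only reads its immediate neighbors, so the 3-D version of these updates parallelizes naturally across GPU threads, with halo exchange at subdomain boundaries in the multi-GPU case.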
XaNSoNS: GPU-accelerated simulator of diffraction patterns of nanoparticles
NASA Astrophysics Data System (ADS)
Neverov, V. S.
XaNSoNS is an open-source software package with GPU support, which simulates X-ray and neutron 1D (or 2D) diffraction patterns and pair-distribution functions (PDF) for amorphous or crystalline nanoparticles (up to ∼10^7 atoms) of heterogeneous structural content. Among the multiple parameters of the structure, the user may specify atomic displacements, site occupancies, molecular displacements and molecular rotations. The software uses general equations nonspecific to crystalline structures to calculate the scattering intensity. It supports four major standards of parallel computing: MPI, OpenMP, Nvidia CUDA and OpenCL, enabling it to run on various architectures, from CPU-based HPCs to consumer-level GPUs.
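The "general equations nonspecific to crystalline structures" are essentially the Debye scattering equation, whose O(N^2) pair sum is what makes GPU acceleration attractive. A toy NumPy sketch for N identical atoms (an illustration, not XaNSoNS code):

```python
import numpy as np

def debye_intensity(positions, q, f=1.0):
    """Orientation-averaged intensity from the Debye scattering equation,
    I(q) = f^2 * sum_ij sin(q r_ij)/(q r_ij), for N identical atoms.
    The O(N^2) pair sum is the part a GPU code parallelizes."""
    d = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((d ** 2).sum(axis=-1))       # pairwise distance matrix
    I = np.empty(len(q), dtype=float)
    for k, qk in enumerate(q):
        # np.sinc(x/pi) = sin(x)/x, with the i == j (r = 0) terms equal to 1
        I[k] = (f * f) * np.sinc(qk * r / np.pi).sum()
    return I

atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
q = np.array([1e-6, 1.0, 5.0])
print(debye_intensity(atoms, q))
```

In the q → 0 limit every sinc term tends to 1, so I → N² f², a convenient sanity check.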
Quasi-spherical direct drive fusion simulations for the Z machine and future accelerators.
VanDevender, J. Pace; McDaniel, Dillon Heirman; Roderick, Norman Frederick; Nash, Thomas J.
2007-11-01
We explored the potential of Quasi-Spherical Direct Drive (QSDD) to reduce the cost and risk of a future fusion driver for Inertial Confinement Fusion (ICF) and to produce megajoule thermonuclear yield on the renovated Z Machine with a pulse-shortening Magnetically Insulated Current Amplifier (MICA). Analytic relationships for constant implosion velocity and constant pusher stability have been derived and show that the required current scales as the implosion time. Therefore, a MICA is necessary to drive QSDD capsules with hot-spot ignition on Z. We have optimized the LASNEX parameters for QSDD with realistic walls and mitigated many of the risks. Although the mix-degraded 1D yield is computed to be approximately 30 MJ on Z, unmitigated wall expansion under the >100 gigabar pressure just before burn prevents ignition in the 2D simulations. A squeezer system of adjacent implosions may mitigate the wall expansion and permit the plasma to burn.
An object-oriented, coprocessor-accelerated model for ice sheet simulations
NASA Astrophysics Data System (ADS)
Seddik, H.; Greve, R.
2013-12-01
Recently, numerous models capable of modeling the thermodynamics of ice sheets have been developed within the ice sheet modeling community. Their capabilities span a wide range of features, with different numerical methods (finite difference or finite element), different implementations of the ice flow mechanics (shallow-ice, higher-order, full Stokes) and different treatments of the basal and coastal areas (basal hydrology, basal sliding, ice shelves). Shallow-ice models (SICOPOLIS, IcIES, PISM, etc.) have been widely used for modeling whole ice sheets (Greenland and Antarctica) due to the relatively low computational cost of the shallow-ice approximation, but higher-order (ISSM, AIF) and full Stokes (Elmer/Ice) models have recently been used to model the Greenland ice sheet. The advance in processor speed and the decrease in the cost of accessing large amounts of memory and storage have undoubtedly been the driving force in the commoditization of models with higher capabilities, and the popularity of Elmer/Ice (http://elmerice.elmerfem.com), with an active user base, is a notable representation of this trend. Elmer/Ice is a full Stokes model built on top of the multi-physics package Elmer (http://www.csc.fi/english/pages/elmer), which provides the full machinery for the complex finite element procedure and is fully parallel (mesh partitioning with OpenMPI communication). Elmer is mainly written in Fortran 90 and targets essentially traditional processors, as the code base was not initially written to run on modern coprocessors (yet adding support for the recently introduced x86-based coprocessors is possible). Furthermore, a truly modular and object-oriented implementation is required for quick adaptation to fast-evolving capabilities in hardware (Fortran 2003 provides an object-oriented programming model, though not a clean one, and adopting it would require a tricky refactoring of the Elmer code). In this work, the object-oriented, coprocessor-accelerated finite element
The GENGA Code: Gravitational Encounters in N-body simulations with GPU Acceleration.
NASA Astrophysics Data System (ADS)
Grimm, Simon; Stadel, Joachim
2013-07-01
We present a GPU (Graphics Processing Unit) implementation of a hybrid symplectic N-body integrator based on the Mercury code (Chambers 1999), which handles close encounters with very good energy conservation. It uses a combination of mixed-variable integration (Wisdom & Holman 1991) and a direct N-body Bulirsch-Stoer method. GENGA is written in CUDA C and runs on NVIDIA GPUs. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. To achieve the best performance, GENGA runs completely on the GPU, where it can take advantage of the very fast, but limited, memory that exists there. All operations are performed in parallel, including close encounter detection and the grouping of independent close encounter pairs. Compared to Mercury, GENGA runs up to 30 times faster. Two applications of GENGA are presented: first, the dynamics of planetesimals and the late stage of rocky planet formation due to planetesimal collisions; second, a dynamical stability analysis of an exoplanetary system with an additional hypothetical super-Earth, which shows that in some multiple planetary systems, additional super-Earths could exist without perturbing the dynamical stability of the other planets (Elser et al. 2013).
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo simulations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
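The core idea, one event scoring at many tally points through a kernel, can be sketched with a plain Gaussian kernel in 1-D (an illustration only; the paper's MFP KDE uses transport-specific kernels and geometries):

```python
import numpy as np

def kde_tally(event_x, event_w, tally_x, h=0.2):
    """Kernel-density tally sketch: every Monte Carlo event (position,
    weight) contributes to ALL tally points through a Gaussian kernel
    of bandwidth h, rather than scoring a single histogram bin."""
    d = (tally_x[:, None] - event_x[None, :]) / h
    k = np.exp(-0.5 * d ** 2) / (h * np.sqrt(2.0 * np.pi))
    return (k * event_w[None, :]).sum(axis=1) / event_w.sum()

rng = np.random.default_rng(1)
events = rng.normal(0.0, 1.0, size=2000)   # toy "collision sites"
weights = np.ones_like(events)
grid = np.linspace(-5.0, 5.0, 201)
density = kde_tally(events, weights, grid)
```

Because every event smears its score over neighboring tally points, the estimate at a point no longer depends on a bin width, which is the source of the variance advantage over a histogram at fine resolution.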
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapt three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in the multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than that on a multi-GPU single workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use over 100 GB of GPU memory.
Kim, Yoon Sang; Khazaei, Zeinab; Ko, Junho; Afarideh, Hossein; Ghergherehchi, Mitra
2016-04-07
At present, the bremsstrahlung photon beams produced by linear accelerators are the most commonly employed method of radiotherapy for tumor treatments. A photoneutron source based on three different energies (6, 10 and 15 MeV) of a linac electron beam was designed by means of the Geant4 and Monte Carlo N-Particle eXtended (MCNPX) simulation codes. To obtain the maximum neutron yield, two arrangements for the photoneutron converter were studied: (a) without a collimator, and (b) placement of the converter after the collimator. The maximum photon intensities in tungsten were 0.73, 1.24 and 2.07 photons/e at 6, 10 and 15 MeV, respectively. There was no considerable increase in the photon fluence spectra from 6 to 15 MeV at the optimum thickness, between 0.8 mm and 2 mm of tungsten. The optimum dimensions of the collimator were determined to be a length of 140 mm with an aperture of 5 mm × 70 mm for iron in a slit shape. According to the neutron yield, the best thickness obtained for the studied materials was 30 mm. The number of neutrons generated in BeO achieved its maximum value at 6 MeV, unlike that in Be, where the highest number of neutrons was observed at 15 MeV. Statistical uncertainty in all simulations was less than 0.3% and 0.05% for MCNPX and the standard electromagnetic (EM) physics packages of Geant4, respectively. Differences among spectra in various regions are due to different cross-section and stopping-power data and different simulations of the physics processes.
Particle Acceleration At Small-Scale Flux Ropes In The Heliosphere
NASA Astrophysics Data System (ADS)
Zank, G. P.; Hunana, P.; Mostafavi, P.; le Roux, J. A.; Li, G.; Webb, G. M.; Khabarova, O.; Cummings, A. C.; Stone, E. C.; Decker, R. B.
2015-12-01
An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands or flux ropes. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and by the contraction of magnetic islands. We discuss the basic physics of particle acceleration by single magnetic islands and describe how to incorporate these ideas in a distributed "sea of magnetic islands". We briefly describe some observations and selected simulations, and then introduce a transport approach for describing particle acceleration at small-scale flux ropes. We discuss particle acceleration in the supersonic solar wind and extend these ideas to particle acceleration at shock waves. These models are appropriate to the acceleration of both electrons and ions. We describe model predictions and supporting observations.
Employee discipline: a changing paradigm.
Raper, J L; Myaya, S N
1993-12-01
To increase the receptiveness of health care supervisors to a broader meaning of discipline, and to stimulate investigation of nontraditional methods of encouraging employees who fail to meet minimum standards of conduct and thereby negatively affect the quality of patient care, a subjectively realistic view of the implications of the traditional punitive disciplinary paradigm is presented. Through the use of a case study, the authors present, explain, and apply the contemporary concept of discipline without punishment as first described by J. Huberman.
Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K
2005-01-01
In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. The future development of PHITS includes better parameterization in the JQMD model used for nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle of up to 4 degrees.
NASA Astrophysics Data System (ADS)
Johnston, Bobby; Miskimen, Rory; Downing, Matthew; Haughwout, Christian; Schick, Andrew; Jefferson Lab Hall D Collaboration
2016-09-01
The Thomas Jefferson National Accelerator Facility has proposed to make a precision measurement of the charged pion polarizability through measurements of γγ → π+π- cross sections using the new GlueX detector. This experiment will have a large muon background which must be filtered out of the pion signal. To address this, we are developing an array of Multi-Wire Proportional Chambers (MWPCs) that will allow the pions to be distinguished from the muons, permitting a precise measurement of the polarizability. Small (1:8 scale) and medium (1:5 scale) prototypes have been constructed and tested, and a full-scale prototype is currently being assembled. MWPC electronics were developed and tested to amplify the signal from the detection chamber, and were designed to interface with Jefferson Lab's existing data acquisition system. In order to construct the detectors, a class 10,000 clean room was assembled specifically for this purpose. Lastly, Geant4 software is being used to run Monte Carlo simulations of the experiment. This allows us to determine the optimal orientation and number of MWPCs needed for proper filtering, which will indicate how many more MWPCs must be built before the experiment can be run. Department of Energy.
Linking metacommunity paradigms to spatial coexistence mechanisms.
Shoemaker, Lauren G; Melbourne, Brett A
2016-09-01
Four metacommunity paradigms (usually called neutral, species sorting, mass effects, and patch dynamics) are widely used for empirical and theoretical studies of spatial community dynamics. The paradigm framework highlights key ecological mechanisms operating in metacommunities, such as dispersal limitation, competition-colonization tradeoffs, or species equivalencies. However, differences in coexistence mechanisms between the paradigms, and in situations with combined influences of multiple paradigms, are not well understood. Here, we create a common model for competitive metacommunities, with unique parameterizations for each metacommunity paradigm and for scenarios with multiple paradigms operating simultaneously. We derive analytical expressions for the strength of Chesson's spatial coexistence mechanisms and quantify these for each paradigm via simulation. For our model, fitness-density covariance, a concentration effect measuring the importance of intraspecific aggregation of individuals, is the dominant coexistence mechanism in all three niche-based metacommunity paradigms. Increased dispersal between patches erodes intraspecific aggregation, leading to lower coexistence strength in the mass effects paradigm compared to species sorting. Our analysis demonstrates the potential importance of aggregation of individuals (fitness-density covariance) over covariation in abiotic environments and competition between species (the storage effect), as fitness-density covariance can be stronger than the storage effect and is the sole stabilizing mechanism in the patch dynamics paradigm. As expected, stable coexistence does not occur in the neutral paradigm, which requires species to be equal and emphasizes the role of stochasticity. We show that stochasticity also plays an important role in niche-structured metacommunities by altering coexistence strength. We conclude that Chesson's spatial coexistence mechanisms provide a flexible framework for comparing
NASA Astrophysics Data System (ADS)
Kawata, Masaaki; Mikami, Masuhiro
A canonical molecular dynamics (MD) simulation was accelerated by using an efficient implementation of the multiple-timestep integrator algorithm combined with the periodic fast multipole method (MEFMM) for both Coulombic and van der Waals interactions. Although a significant reduction in computational cost had been obtained previously by using the integrated method, in which the MEFMM was used only to calculate Coulombic interactions (Kawata, M., and Mikami, M., 2000, J. Comput. Chem., in press), the extension of this method to include van der Waals interactions yielded further acceleration of the overall MD calculation by a factor of about two. Compared with conventional methods, such as the velocity-Verlet algorithm combined with the Ewald method (timestep of 0.25 fs), the speedup obtained by using the extended integrated method amounted to a factor of 500 for a 100 ps simulation. Therefore, the extended method substantially reduces the computational effort of large-scale MD simulations.
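The multiple-timestep integrator referred to here follows the reversible-RESPA pattern: cheap, rapidly varying forces are integrated with a small inner timestep, while expensive, slowly varying forces are applied only at the outer timestep. A minimal sketch with hypothetical `f_fast`/`f_slow` force callables (an illustration of the pattern, not the authors' implementation):

```python
import numpy as np

def respa_step(x, v, f_fast, f_slow, dt, n_inner, m=1.0):
    """One reversible-RESPA step: the expensive, slowly varying force is
    applied as half-kicks at the outer timestep dt, while the cheap fast
    force is integrated with n_inner velocity-Verlet substeps of dt/n_inner."""
    v += 0.5 * dt * f_slow(x) / m        # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):             # inner loop: fast force only
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m        # closing outer half-kick
    return x, v

# Toy system: a stiff "bonded" force plus a soft "nonbonded" force.
k_fast, k_slow = 100.0, 1.0
x, v = 1.0, 0.0
e0 = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
for _ in range(2000):
    x, v = respa_step(x, v, lambda y: -k_fast * y, lambda y: -k_slow * y,
                      dt=0.05, n_inner=10)
e1 = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
print(e0, e1)
```

Because the scheme is symplectic and time-reversible, the total energy of the toy oscillator stays bounded near its initial value even though the slow force is evaluated ten times less often than the fast one.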
Gao, F.; Gai, W.; Power, J. G.; Kim, K. J.; Sun, Y. E.; Piot, P.; Rihaoui, M.; High Energy Physics; Northern Illinois Univ.; FNAL
2009-01-01
Transverse-to-longitudinal emittance exchange has promising applications in various advanced acceleration and light source concepts. A proof-of-principle experiment to demonstrate this phase space manipulation method is currently being planned at the Argonne Wakefield Accelerator. The experiment focuses on exchanging a low longitudinal emittance with a high transverse horizontal emittance and also incorporates room for possible parametric studies, e.g., using an incoming flat beam with tunable horizontal emittance. In this paper, we present realistic start-to-end beam dynamics simulations of the scheme and explore the limitations of this phase space exchange.
Dark Energy: Anatomy of a Paradigm Shift in Cosmology
NASA Astrophysics Data System (ADS)
Hocutt, Hannah
2016-03-01
Science is defined by its ability to shift its paradigm on the basis of observation and data. Throughout history, the worldview of the scientific community has repeatedly changed to fit what was scientifically determined to be fact. One of the latest paradigm shifts concerned the shape and fate of the universe. This research details the progression from the early paradigm of a decelerating expanding universe, through the discovery of dark energy, to the current paradigm of a universe that is not only expanding but also accelerating. Advisor: Dr. Kristine Larsen.
ERIC Educational Resources Information Center
Hammack, Phillip L.
2005-01-01
Through the application of life course theory to the study of sexual orientation, this paper specifies a new paradigm for research on human sexual orientation that seeks to reconcile divisions among biological, social science, and humanistic paradigms. Recognizing the historical, social, and cultural relativity of human development, this paradigm…
Organizational Paradigm Shifts.
ERIC Educational Resources Information Center
National Association of College and University Business Officers, Washington, DC.
This collection of essays explores a new paradigm of higher education. The first essay, "Beyond Re-engineering: Changing the Organizational Paradigm" (L. Edwin Coate), suggests a model of quality process management and a structure for managing organizational change. "Thinking About Consortia" (Mary Jo Maydew) discusses…
ERIC Educational Resources Information Center
Perna, Mark C.
2005-01-01
Is marketing an expense or an investment? Most accountants will claim that marketing is an expense, and that certainly seems true when cutting the checks to fund these efforts. But when it is done properly, marketing is the best investment. A key principle of Smart Marketing is the Investment Paradigm. The Investment Paradigm is the understanding that every…
ERIC Educational Resources Information Center
Loynes, Chris
2002-01-01
The "algorithmic" model of outdoor experiential learning is based in military tradition and characterized by questionable scientific rationale, production line metaphor, and the notion of learning as marketable commodity. Alternatives are the moral paradigm; the ecological paradigm "friluftsliv"; and the emerging…
Ferrand, Gilles; Safi-Harb, Samar; Decourchelle, Anne E-mail: samar@physics.umanitoba.ca
2014-07-01
Supernova remnants are believed to be major contributors to Galactic cosmic rays. In this paper, we explore how the non-thermal emission from young remnants can be used to probe the production of energetic particles at the shock (both protons and electrons). Our model couples hydrodynamic simulations of a supernova remnant with a kinetic treatment of particle acceleration. We include two important back-reaction loops upstream of the shock: energetic particles can (1) modify the flow structure and (2) amplify the magnetic field. As the latter process is not fully understood, we use different limit cases that encompass a wide range of possibilities. We follow the history of the shock dynamics and of the particle transport downstream of the shock, which allows us to compute the non-thermal emission from the remnant at any given age. We do this in three dimensions, in order to generate projected maps that can be compared with observations. We observe that completely different recipes for the magnetic field can lead to similar modifications of the shock structure, albeit with very different configurations of the field and particles. We show how this affects the emission patterns in different energy bands, from radio to X-rays and γ-rays. High magnetic fields (>100 μG) directly impact the synchrotron emission from electrons, by restricting their emission to thin rims, and indirectly impact the inverse Compton emission from electrons and also the pion decay emission from protons, mostly by shifting their cut-off energies to lower and higher energies, respectively.
NASA Astrophysics Data System (ADS)
Sidorin, Anatoly
2010-01-01
In linear accelerators, particles are accelerated by either electrostatic fields or oscillating radio frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction, and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
Hwang, Byungjin; Bang, Duhee
2016-01-01
All synthetic DNA materials require prior programming of the building blocks of the oligonucleotide sequences. The development of a programmable microarray platform provides cost-effective and time-efficient solutions in the field of data storage using DNA. However, the scalability of the synthesis is not on par with the accelerating sequencing capacity. Here, we report on a new paradigm for generating genetic material (writing) using a degenerate oligonucleotide and an optomechanical retrieval method that leverages sequencing (reading) throughput to generate the desired number of oligonucleotides. As a proof of concept, we demonstrate the feasibility of our concept for digital information storage in DNA. In simulation, the data storage capacity is expected to increase exponentially with the size of the degenerate space. The present study highlights a major framework change in the conventional DNA-writing paradigm, as a sequencer itself can become a potential source of genetic material. PMID:27876825
NASA Astrophysics Data System (ADS)
Catalano, M.; Agosteo, S.; Moretti, R.; Andreoli, S.
2007-06-01
The principle of optimisation of the EURATOM 97/43 directive foresees that, for all medical exposure of individuals for radiotherapeutic purposes, exposures of target volumes shall be individually planned, taking into account that doses to non-target volumes and tissues shall be as low as reasonably achievable and consistent with the intended radiotherapeutic purpose of the exposure. Treatment optimisation has to be carried out especially in non-conventional radiotherapy procedures, such as Intra-Operative Radiation Therapy (IORT) with a mobile dedicated LINear ACcelerator (LINAC), which does not make use of a Treatment Planning System. IORT is carried out with electron beams and refers to the application of radiation during a surgical intervention, after the removal of a neoplastic mass; it can also be used as a one-time, stand-alone treatment for initial cancer of small volume. IORT foresees a single session and a single beam only; therefore it is necessary to use protection systems (disks) temporarily positioned between the target volume and the underlying tissues, along the beam axis. A single high-Z shielding disk is used to stop the electrons of the beam at a certain depth and protect the tissues located below. Electron backscatter produces an enhancement in the dose above the disk, which can be reduced if a second low-Z disk is placed above the first. Therefore two protection disks are used in clinical application. On the other hand, the dose enhancement at the interface of the high-Z disk and the target, due to backscattered radiation, can usefully be exploited to improve the uniformity in the treatment of thicker target volumes. Furthermore, the dose above disks of different Z materials has to be evaluated in order to determine the optimal combination of shielding disks that both protects the underlying tissues and yields the most uniform dose distribution in target volumes of different thicknesses. The dose enhancement can be evaluated using the electron
Landgren, D.
1995-05-01
Paul Meagher made a big mistake when he asked me about my speech. I asked him what I should talk about. He reiterated the title of the conference, "Forecasting and DSM: Organizing for Success," and said that whatever issues I wanted to cover were fine with him. As a result I will cover those areas I've been thinking about recently. It is hard for me to extract either Forecasting or Demand-Side Management out from the broader issues unwinding in the industry today. I've been around long enough to be involved in two major shifts in the industry. I call these paradigm shifts because as a planner I tend to build models in my mind to represent business or regulatory structure. Since a paradigm is defined as a clear model of something, I tend to talk about structural shifts in the industry as paradigm shifts. The first paradigm shift was brought about by the rapid escalation of energy prices in the 1970s. The second paradigm shift, brought about in part because of the first and because of growing concerns about the environment, ushered in the era of utility conservation and load management programs (components of a broader DSM concept - unfortunately today many people limit DSM to only these two pieces). The third paradigm shift is just starting, driven by partial deregulation and the subsequent increase in competition. My talk today will focus on issues related to the second paradigm, particularly in terms of utility planners getting more organized to deal with the synergies in the fields of forecasting, demand-side planning, and evaluation. I will also reflect on two new issues within the existing paradigm that influence these functional areas, namely beneficial electrification and integration of DSM into T&D planning. Finally I will talk about what I see coming as we go through another paradigm shift, particularly as it impacts forecasting and DSM.
Paradigms for machine learning
NASA Technical Reports Server (NTRS)
Schlimmer, Jeffrey C.; Langley, Pat
1991-01-01
Five paradigms are described for machine learning: connectionist (neural network) methods, genetic algorithms and classifier systems, empirical methods for inducing rules and decision trees, analytic learning methods, and case-based approaches. Some dimensions along which these paradigms vary in their approach to learning are considered, and the basic methods used within each framework are reviewed, together with open research issues. It is argued that the similarities among the paradigms are more important than their differences, and that future work should attempt to bridge the existing boundaries. Finally, some recent developments in the field of machine learning are discussed, and their impact on both research and applications is examined.
Tsiklauri, D.
2012-08-15
The process of particle acceleration by left-hand, circularly polarised inertial Alfven waves (IAWs) in a transversely inhomogeneous plasma is studied using 3D particle-in-cell simulation. A cylindrical tube with an inhomogeneity scale, transverse to the background magnetic field, of the order of the ion inertial length is considered, on which IAWs with frequency 0.3ω_ci are launched and allowed to develop over three wavelengths. As a result, time-varying parallel electric fields are generated in the density gradient regions, which accelerate electrons in the direction parallel to the magnetic field. The driven perpendicular electric field of the IAWs also heats ions in the transverse direction. Such a numerical setup is relevant for solar flaring loops and the Earth's auroral zone. This first 3D, fully kinetic simulation demonstrates an electron acceleration efficiency in the density inhomogeneity regions, along the magnetic field, of the order of 45% and ion heating, in the direction transverse to the magnetic field, of 75%. The latter is a factor of two higher than in the previous 2.5D analogous study and is in accordance with solar flare particle acceleration observations. We find that the generated parallel electric field is localised in the density inhomogeneity region and rotates in the same direction and with the same angular frequency as the initially launched IAW. Our numerical simulations also suggest that the 'knee' often found in solar flare electron spectra can alternatively be interpreted as the Landau damping (Cerenkov resonance effect) of IAWs due to wave-particle interactions.
Paley, John
2011-01-01
The fictionalist paradigm is introduced, and differentiated from other paradigms, using the Lincoln & Guba template. Following an initial overview, the axioms of fictionalism are delineated by reference to standard metaphysical categories: the nature of reality, the relationship between knower and known, the possibility of generalization, the possibility of causal linkages, and the role of values in inquiry. Although a paradigm's 'basic beliefs' are arbitrary and can be assumed for any reason, in this paper the fictionalist axioms are supported with philosophical considerations, and the key differences between fictionalism, positivism, and constructivism are briefly explained. Paradigm characteristics are then derived, focusing particularly on the methodological consequences. Towards the end of the paper, various objections and misunderstandings are discussed.
Kartavykh, Y. Y.; Dröge, W.; Gedalin, M.
2016-03-20
We use numerical solutions of the focused transport equation obtained by an implicit stochastic differential equation scheme to study the evolution of the pitch-angle dependent distribution function of protons in the vicinity of shock waves. For a planar stationary parallel shock, the effects of anisotropic distribution functions, pitch-angle dependent spatial diffusion, and first-order Fermi acceleration at the shock are examined, including the timescales on which the energy spectrum approaches the predictions of diffusive shock acceleration theory. We then consider the case that a flare-accelerated population of ions is released close to the Sun simultaneously with a traveling interplanetary shock for which we assume a simplified geometry. We investigate the consequences of adiabatic focusing in the diverging magnetic field on the particle transport at the shock, and of the competing effects of acceleration at the shock and adiabatic energy losses in the expanding solar wind. We analyze the resulting intensities, anisotropies, and energy spectra as a function of time and find that our simulations can naturally reproduce the morphologies of so-called mixed particle events in which sometimes the prompt and sometimes the shock component is more prominent, by assuming parameter values which are typically observed for scattering mean free paths of ions in the inner heliosphere and energy spectra of the flare particles which are injected simultaneously with the release of the shock.
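The focused transport equation above is solved with a stochastic differential equation scheme. A minimal sketch of the pitch-angle scattering part alone, using an explicit Euler-Maruyama step with an assumed diffusion coefficient D(μ) = D0(1 − μ²) (the authors use an implicit scheme for the full equation, including focusing and acceleration terms), is:

```python
import numpy as np

rng = np.random.default_rng(0)

def pitch_angle_step(mu, dt, D0):
    """Euler-Maruyama step for pitch-angle diffusion with
    D(mu) = D0*(1 - mu^2); the Ito drift term is dD/dmu = -2*D0*mu."""
    drift = -2.0 * D0 * mu
    diff = np.sqrt(2.0 * D0 * (1.0 - mu**2).clip(min=0.0))
    mu_new = mu + drift * dt + diff * np.sqrt(dt) * rng.standard_normal(mu.shape)
    return np.clip(mu_new, -1.0, 1.0)    # keep pitch-angle cosine in [-1, 1]

mu = np.full(20000, 0.9)                 # strongly beamed initial distribution
for _ in range(2000):                    # integrate to t = 2 (arbitrary units)
    mu = pitch_angle_step(mu, dt=1e-3, D0=1.0)
# scattering isotropizes the beam: the mean pitch-angle cosine relaxes to ~0
mean_mu = mu.mean()
```

The mean of μ decays roughly as exp(-2 D0 t) for this diffusion coefficient, so after a few scattering times the distribution is nearly isotropic.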
NASA Astrophysics Data System (ADS)
Stark, D. J.; Yin, L.; Albright, B. J.; Guo, F.
2016-10-01
A PIC study of laser-ion acceleration via relativistic induced transparency points to how 2D-S (laser polarization in the simulation plane) and -P (out-of-plane) simulations may capture different physics characterizing these systems, visible in their entirety in (often cost-prohibitive) 3D simulations. The electron momentum anisotropy induced in the target by the laser pulse is dramatically different in the two 2D cases, manifesting in differences in polarization shift, electric field strength, density threshold for onset of relativistic induced transparency, and target expansion timescales. In particular, a trajectory analysis of individual electrons and ions may allow one to delineate the role of the fields and modes responsible for ion acceleration. With this information, we consider how 2D simulations might be used to develop, in some respects, a fully 3D understanding of the system. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Los Alamos National Laboratory Directed Research and Development Program.
NASA Astrophysics Data System (ADS)
Tygier, S.; Appleby, R. B.; Garland, J. M.; Hock, K.; Owen, H.; Kelliher, D. J.; Sheehy, S. L.
2015-03-01
We present PyZgoubi, a framework that has been developed based on the tracking engine Zgoubi to model, optimise and visualise the dynamics in particle accelerators, especially fixed-field alternating-gradient (FFAG) accelerators. We show that PyZgoubi abstracts Zgoubi by wrapping it in an easy-to-use Python framework in order to allow simple construction, parameterisation, visualisation and optimisation of FFAG accelerator lattices. Its object oriented design gives it the flexibility and extensibility required for current novel FFAG design. We apply PyZgoubi to two example FFAGs; this includes determining the dynamic aperture of the PAMELA medical FFAG in the presence of magnet misalignments, and illustrating how PyZgoubi may be used to optimise FFAGs. We also discuss a robust definition of dynamic aperture in an FFAG and show its implementation in PyZgoubi.
Can Accelerators Accelerate Learning?
NASA Astrophysics Data System (ADS)
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-03-01
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and bringing the students close to modern laboratory techniques.
Chen, Wenduo; Zhu, Youliang; Cui, Fengchao; Liu, Lunyang; Sun, Zhaoyan; Chen, Jizhong; Li, Yunqi
2016-01-01
The Gay-Berne (GB) potential is regarded as an accurate model in the simulation of anisotropic particles, especially for liquid crystal (LC) mesogens. However, its computational complexity leads to an extremely time-consuming process for large systems. Here, we developed a GPU-accelerated molecular dynamics (MD) simulation with coarse-grained GB potential implemented in the GALAMOST package to investigate the LC phase transitions for mesogens in small molecules, main-chain or side-chain polymers. For identical mesogens in three different molecules, on cooling from fully isotropic melts, the small molecules form a single-domain smectic-B phase, while the main-chain LC polymers prefer a single-domain nematic phase as a result of connective restraints in neighboring mesogens. The phase transition of side-chain LC polymers undergoes a two-step process: nucleation of nematic islands and formation of multi-domain nematic texture. This particular behavior arises because the rotational orientation of the mesogens is hindered by the polymer backbones. Both the global distribution and the local orientation of mesogens are critical for the phase transition of anisotropic particles. Furthermore, compared with the MD simulation in LAMMPS, our GPU-accelerated code is about 4 times faster than the GPU version of LAMMPS and at least 200 times faster than the CPU version of LAMMPS. This study clearly shows that GPU-accelerated MD simulation with the GB potential in GALAMOST can efficiently handle systems with anisotropic particles and interactions, and accurately explore phase differences originating from molecular structures.
Righi, Sergio; Karaj, Evis; Felici, Giuseppe; Di Martino, Fabio
2013-01-07
The Novac7 and Liac are linear accelerators (linacs) dedicated to intraoperative radiation therapy (IORT), which produce high-energy, very high dose-per-pulse electron beams. The characteristics of the accelerator heads of the Novac7 and Liac differ from those of conventional electron accelerators. The aim of this work was to investigate the specific characteristics of the Novac7 and Liac electron beams using the Monte Carlo method. The Monte Carlo code BEAMnrc was employed to model the head and simulate the electron beams. The Monte Carlo simulation was preliminarily validated by comparing the simulated dose distributions with those measured by means of EBT radiochromic film. Then, the energy spectra, mean energy profiles, fluence profiles, photon contamination, and angular distributions were obtained from the Monte Carlo simulation. The Spencer-Attix water-to-air mass restricted collision stopping power ratios (sw,air) were also calculated. Moreover, the modifications of the percentage depth dose in water (backscatter effect) due to the presence of an attenuator plate composed of a sandwich of a 2 mm aluminum foil and a 4 mm lead foil, commonly used for breast treatments, were evaluated. The calculated sw,air values agree with those tabulated in the IAEA TRS-398 dosimetric code of practice within 0.2% and 0.4% at zref (reference depth in water) for the Novac7 and Liac, respectively. These differences are negligible for practical dosimetry. The attenuator plate is sufficient to completely absorb the electron beam for each energy of the Novac7 and Liac; moreover, the shape of the dose distribution in water changes strongly with the introduction of the attenuator plate. This variation depends on the energy of the beam, and it can give rise to an increase in the maximum dose in the range of 3%-9%.
Teng, L.C.
1960-01-19
A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator, including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.
Koh, Wonryull; Blackwell, Kim T.
2011-01-01
Stochastic simulation of reaction–diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies. PMID:21513371
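The Poisson tau-leaping idea referenced above can be illustrated for a plain (nonspatial) reaction system: each channel fires a Poisson-distributed number of times per leap, and leaps that would drive a population negative are rejected. The toy reversible reaction A ⇌ B below is an assumption for illustration, not the authors' reaction-diffusion implementation with concentration-gradient diffusive transfers.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_leap(x, rates, stoich, tau, steps):
    """Poisson tau-leaping: fire each channel a Poisson number of
    times per leap, rejecting leaps that would go negative."""
    for _ in range(steps):
        a = rates(x)                       # propensities of each channel
        k = rng.poisson(a * tau)           # event counts per channel this leap
        x_new = x + stoich.T @ k           # apply net stoichiometric change
        if (x_new < 0).any():              # crude negativity guard
            continue                       # skip leap (shrink tau in practice)
        x = x_new
    return x

# toy network: A -> B (c1 = 1.0), B -> A (c2 = 0.5)
stoich = np.array([[-1, 1],                # reaction 1: A -> B
                   [1, -1]])               # reaction 2: B -> A
rates = lambda x: np.array([1.0 * x[0], 0.5 * x[1]])
x = tau_leap(np.array([1000, 0]), rates, stoich, tau=0.01, steps=2000)
```

At equilibrium the populations satisfy c1·A ≈ c2·B, so A relaxes to about one third of the conserved total; the total population is conserved exactly because every reaction channel does so.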
Owen, Justin
2013-12-01
Coherent electron cooling (CeC) offers a potential new method of cooling hadron beams in colliders such as the Relativistic Heavy Ion Collider (RHIC) or the future electron ion collider eRHIC. A 22 MeV linear accelerator is currently being built as part of a proof of principle experiment for CeC at Brookhaven National Laboratory (BNL). In this thesis we present a simulation of electron beam dynamics including space charge in the 22 MeV CeC proof of principle experiment using the program ASTRA (A Space charge TRacking Algorithm).
NASA Astrophysics Data System (ADS)
Hu, Zhicheng; Li, Ruo; Qiao, Zhonghua
2016-12-01
We study the acceleration of steady-state computation for microflow, which is modeled by the high-order moment models derived recently from the steady-state Boltzmann equation with a BGK-type collision term. By using lower-order model correction, a novel nonlinear multi-level moment solver is developed. Numerical examples verify that the resulting solver improves the convergence significantly and is thus able to greatly accelerate the steady-state computation. The behavior of the solver is also numerically investigated. It is shown that the convergence rate increases, indicating the solver becomes more efficient, as the total number of levels increases. Three order-reduction strategies for the solver are considered. Numerical results show that the most efficient order-reduction strategy is m_{l-1} = ⌈m_l / 2⌉.
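The halving strategy m_{l-1} = ⌈m_l / 2⌉ reported as most efficient generates the level hierarchy of moment orders as follows (a trivial sketch; the stopping order m_min is an assumed parameter, not taken from the paper):

```python
import math

def level_orders(m_top, m_min=1):
    """Order sequence for a multi-level moment solver under the
    halving strategy m_{l-1} = ceil(m_l / 2), down to m_min."""
    orders = [m_top]
    while orders[-1] > m_min:
        orders.append(math.ceil(orders[-1] / 2))
    return orders

# e.g. starting from a 10th-order moment model
print(level_orders(10))   # -> [10, 5, 3, 2, 1]
```

Each coarser level then provides the lower-order model correction for the level above it.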
NASA Astrophysics Data System (ADS)
Pavlenko, O. V.; Tubanov, Ts. A.
2017-01-01
The regularities in the radiation and propagation of seismic waves within the Baikal Rift Zone in Buryatia are studied to estimate the ground motion parameters from the probable future strong earthquakes. The regional parameters of seismic radiation and propagation are estimated by the stochastic simulation (which provides the closest agreement between the calculations and observations) of the acceleration time histories of the earthquakes recorded by the Ulan-Ude seismic station. The acceleration time histories of the strongest earthquakes (M_W 3.4-4.8) that occurred in 2006-2011 at the epicentral distances of 96-125 km and had source depths of 8-12 km have been modeled. The calculations are conducted with estimates of the Q-factor which were previously obtained for the region. The frequency-dependent attenuation and geometrical spreading are estimated from the data on the deep structure of the crust and upper mantle (velocity sections) in the Ulan-Ude region, and the parameters determining the wave forms and duration of acceleration time histories are found by fitting. These parameters fairly well describe all the considered earthquakes. The Ulan-Ude station can be considered as the reference bedrock station with minimum local effects. The obtained estimates for the parameters of seismic radiation and propagation can be used for forecasting the ground motion from the future strong earthquakes and for constructing the seismic zoning maps for Buryatia.
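Stochastic simulation of acceleration time histories in this style can be sketched as windowed Gaussian noise shaped in the frequency domain by an omega-squared source spectrum with anelastic attenuation. All numerical values below (path length, Q, shear velocity, window shape) are illustrative assumptions, not the regional parameters estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_accelerogram(dur, dt, f_corner, q0=150.0, r_km=30.0, beta=3.5):
    """Sketch of the stochastic method: windowed Gaussian noise shaped
    in the frequency domain by an omega-squared source spectrum and a
    simple whole-path attenuation factor exp(-pi*f*R/(Q*beta))."""
    n = int(dur / dt)
    t = np.arange(n) * dt
    noise = rng.standard_normal(n) * np.exp(-t / (0.3 * dur))  # decaying envelope
    spec = np.fft.rfft(noise)
    f = np.fft.rfftfreq(n, dt)
    shape = (2 * np.pi * f)**2 / (1 + (f / f_corner)**2)       # omega-squared source
    shape *= np.exp(-np.pi * f * r_km / (q0 * beta))           # anelastic attenuation
    acc = np.fft.irfft(spec * shape, n)                        # synthetic accelerogram
    return t, acc

t, acc = stochastic_accelerogram(dur=20.0, dt=0.01, f_corner=1.0)
```

A production implementation would additionally fit the envelope, geometrical spreading, and site terms to recorded data, as described in the abstract.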
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
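The acceptance-rejection step with a majorant kernel described above can be sketched for equally weighted particles with a constant kernel (the differential weighting, coagulation-rule matrix, and GPU parallelism of the paper are omitted; the majorant must bound the true kernel so the acceptance ratio stays at or below one):

```python
import numpy as np

rng = np.random.default_rng(2)

def coagulate(vols, n_events, kernel, majorant):
    """Acceptance-rejection coagulation: pick a random pair, accept
    with probability kernel/majorant, merge volumes on acceptance."""
    vols = list(vols)
    accepted = 0
    while accepted < n_events and len(vols) > 1:
        i, j = rng.choice(len(vols), size=2, replace=False)
        if rng.random() < kernel(vols[i], vols[j]) / majorant:
            vols[i] += vols[j]    # merge particle j into particle i
            del vols[j]
            accepted += 1
    return vols

# constant-kernel toy case: K = 1, majorant = 1 (every pair accepted)
vols = coagulate(np.ones(1000), n_events=500, kernel=lambda u, v: 1.0, majorant=1.0)
```

Each coagulation event removes one particle and conserves total volume, so 500 events on 1000 unit-volume particles leave 500 particles of total volume 1000.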
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel, and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range while reducing the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, to avoid a double loop over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used in the acceptance-rejection process with a single loop over all particles; meanwhile, the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is greatly reduced, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
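The majorant-kernel acceptance-rejection idea in the abstract can be sketched in a few lines. This is an illustrative sketch only: the kernel form, function names, and the constant-plus-sum coagulation rate are hypothetical stand-ins, not the paper's actual kernels.

```python
import random

def coag_kernel(v1, v2):
    # Hypothetical coagulation kernel (constant-plus-sum form, for illustration).
    return 1.0 + 0.5 * (v1 + v2)

def select_coagulation_pair(volumes, rng=random):
    """Acceptance-rejection selection of a coagulating pair under a
    majorant kernel. The majorant 1 + v_max bounds coag_kernel from
    above using only the maximum particle volume, so it is obtained
    with a single loop over particles instead of a double loop over
    pairs. Rejected trials are counted, since the mean time step in
    the abstract sums the kernels of rejected and accepted pairs."""
    k_majorant = 1.0 + max(volumes)   # >= coag_kernel(vi, vj) for all pairs
    n = len(volumes)
    trials = 0
    while True:
        i, j = rng.sample(range(n), 2)
        trials += 1
        if rng.random() < coag_kernel(volumes[i], volumes[j]) / k_majorant:
            return i, j, trials
```

Because the majorant only over-estimates the true rate, rejections cost extra trials but never bias which pair is chosen.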
The Learning Paradigm College.
ERIC Educational Resources Information Center
Tagg, John
The author of this book argues that innovations do not transform colleges because higher education faces a problem of scale. The book identifies two paradigms: "organizational," which is the overall theory-in-use of an organization; and "instructional," which incorporates the mission of higher education institutions to provide instruction in the…
Deconstructing Research: Paradigms Lost
ERIC Educational Resources Information Center
Trifonas, Peter Pericles
2009-01-01
In recent decades, proponents of naturalistic and/or critical modes of inquiry advocating the use of ethnographic techniques for the narrative-based study of phenomena within pedagogical contexts have challenged the central methodological paradigm of educational research: that is, the tendency among its practitioners to adhere to quantitative…
ERIC Educational Resources Information Center
Wrigley, Terry
2011-01-01
This short paper points to some paradigm issues in the field of school development (leadership, effectiveness, improvement) and their relationship to social justice. It contextualises the dominant School Effectiveness and School Improvement models within neo-liberal marketisation, paying attention to their transformation through a "marriage of…
Alternative Evaluation Research Paradigm.
ERIC Educational Resources Information Center
Patton, Michael Quinn
This monograph is one of a continuing series initiated to provide materials for teachers, parents, school administrators, and governmental decision-makers that might encourage reexamination of a range of evaluation issues and perspectives about schools and schooling. This monograph is a description and analysis of two contrasting paradigms: one…
NASA Astrophysics Data System (ADS)
Mebane, Sheryl Dee
Part I. Molecular dynamics simulation of organometallic reaction dynamics. To study the interplay of solute and solvent dynamics, large-scale molecular dynamics simulations were employed. Lennard-Jones and electrostatic models of potential energies from solvent-only studies were combined with solute potentials generated from ab-initio calculations. Radial distribution functions and other measures revealed the polar solvent's response to solute dynamics following CO dissociation. In future studies, the time-scale for solvent coordination will be confirmed with ultrafast spectroscopy data. Part II. Enhancing achievement in chemistry for African American students through innovations in pedagogy aligned with supporting assessment and curriculum and integrated under an alternative research paradigm. Much progress has been made in the area of research in education that focuses on teaching and learning in science. Much effort has also centered on documenting and exploring the disparity in academic achievement between underrepresented minority students and students comprising a majority in academic circles. However, few research projects have probed educational inequities in the context of mainstream science education. In order to enrich this research area and to better reach underserved learning communities, the educational experience of African American students in an ethnically and academically diverse high school science class has been examined throughout one, largely successful, academic year. The bulk of data gathered during the study was obtained through several qualitative research methods and was interpreted using research literature that offered fresh theoretical perspectives on equity that may better support effective action.
NASA Astrophysics Data System (ADS)
Mostafaei, F.; McNeill, F. E.; Chettle, D. R.; Matysiak, W.; Bhatia, C.; Prestwich, W. V.
2015-01-01
We have tested the Monte Carlo code FLUKA for its ability to assist in the development of a better system for the in vivo measurement of fluorine. We used it to create a neutron flux map of the inside of the in vivo neutron activation analysis irradiation cavity at the McMaster Accelerator Laboratory. The cavity is used in a system that has been developed for assessment of fluorine levels in the human hand. This study was undertaken to (i) assess the FLUKA code, (ii) find the optimal hand position inside the cavity and assess the effects on precision of a hand being in a non-optimal position, and (iii) determine the best location for our γ-ray detection system within the accelerator beam hall. Simulation estimates were performed using FLUKA. Experimental measurements of the neutron flux were performed using Mn wires. The activation of the wires was measured inside (1) an empty bottle, (2) a bottle containing water, (3) a bottle covered with cadmium and (4) a dry powder-based fluorine phantom. FLUKA was used to simulate the irradiation cavity and to estimate the neutron flux in different positions both inside, and external to, the cavity. The experimental results were found to be consistent with the Monte Carlo simulated neutron flux. Both experiment and simulation showed that there is an optimal position in the cavity, but that the effect on the thermal flux of a hand being in a non-optimal position is less than 20%, which will result in a less than 10% effect on the measurement precision. FLUKA appears to be a useful code for modeling this type of experimental system.
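The step from a less-than-20% flux reduction to a less-than-10% precision effect follows from Poisson counting statistics, where relative precision scales as one over the square root of the counts. A minimal sketch of that scaling (the function name is ours, not from the paper):

```python
import math

def precision_degradation(flux_loss):
    """Fractional worsening of relative Poisson counting precision when
    the detected counts drop by `flux_loss`: sigma/N scales as
    1/sqrt(counts), and to first order 1/sqrt(1 - f) - 1 ~ f/2, so a
    flux reduction of up to 20% costs roughly half that, ~10%, in
    measurement precision."""
    return 1.0 / math.sqrt(1.0 - flux_loss) - 1.0
```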
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher-order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby
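The "nearby region" work that dominates FMM execution time, and that MD-Engine II offloads to hardware, is a direct pairwise sum within a cutoff. A minimal CPU sketch of that inner loop (unit constants, hypothetical names; a real code would use force fields and periodic boundaries):

```python
import math

def direct_coulomb_energy(positions, charges, cutoff):
    """Direct particle-particle Coulomb sum over pairs within `cutoff`
    (Gaussian units, k = 1): the near-field evaluation an FMM leaves
    to direct summation, which the board accelerates in hardware."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx))
            if 0.0 < r < cutoff:
                energy += charges[i] * charges[j] / r
    return energy
```

Enlarging the nearby region shifts work from the (approximate) multipole far field to this exact direct sum, which is why hardware acceleration of the direct part buys accuracy at comparable execution time.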
NASA Astrophysics Data System (ADS)
Yildiz, H. Duran; Cakir, R.; Porsuk, D.
2015-06-01
Design and simulation of a superconducting gun cavity with 3½ cells have been studied in order to give the electron beam its initial acceleration in the linear accelerating system at the Institute of Accelerator Technologies at Ankara University. Electrons are accelerated through the gun cavity with the help of radiofrequency power supplies in cryogenic systems. The accelerating gradient should be as high as possible to accelerate the electron beam inside the cavity. In this study, the electron beam reaches 9.17 MeV at the end of the gun cavity with an accelerating gradient of Ec = 19.21 MV/m. The 1.3 GHz gun cavity consists of three TESLA-like shaped cells, while the specially designed gun cell includes a cathode plug. Optimized beam parameters inside the gun cavity (average beam current 3 mA, transverse emittance 2.5 mm mrad, repetition rate 30 MHz, among others) are obtained for the SASE-FEL system. The Superfish/Poisson program is used to design each cell of the superconducting cavity. Superconducting gun cavity and radiofrequency properties are studied using 2D Superfish/Poisson, 3D Computer Simulation Technology Microwave Studio, and 3D Computer Simulation Technology Particle Studio. Superfish/Poisson is also used to optimize the geometry of the cavity cells for the highest accelerating gradient. The behavior of the particles along the beamline is also included in this study; the ASTRA code is used to track the particles.
The Nature of Paradigms and Paradigm Shifts in Music Education
ERIC Educational Resources Information Center
Panaiotidi, Elvira
2005-01-01
In this paper, the author attempts to extend the paradigm approach into the philosophy of music education and to build upon this basis a model for structuring music education discourse. The author begins with an examination of Peter Abbs' account of paradigms and paradigm shifts in arts education. Then she turns to Kuhn's conception and to his…
2013-07-09
"…rarefied gas flow simulations by the DSMC method," Phys. Fluids …; Symposium on Rarefied Gas Dynamics, AIP Conf. Proc. 1501, 519-526 (2012); doi: 10.1063/1.4769583; Valentini, P., Zhang, C., and Schwartzentruber, T.E. … The DSMC method [1] simulates the Boltzmann equation [2] and is therefore accurate for highly nonequilibrium flows relevant to rarefied flows and
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of the Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain in computational intensity with comparable hardware and program coding/testing expenses. In this paper, using sample 2D and 3D CT problems, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and an FPGA-based solution) have been analyzed. We have shown that hardware-accelerated CT image reconstruction can achieve similar levels of noise and clarity of features compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
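The parallelism the abstract exploits is visible in even the simplest reconstruction kernel: in backprojection, every view and every pixel can be processed independently. A deliberately small, unfiltered parallel-beam sketch (pure Python, with hypothetical names; a real implementation would filter the projections and interpolate):

```python
import math

def backproject(sinogram, angles, size):
    """Unfiltered parallel-beam backprojection: each view's detector
    samples are smeared back along their rays. The per-view and
    per-pixel independence of the inner loops is what maps naturally
    onto GPU/FPGA parallelism in hardware-accelerated reconstruction."""
    c = (size - 1) / 2.0
    recon = [[0.0] * size for _ in range(size)]
    for proj, theta in zip(sinogram, angles):
        ct, st = math.cos(theta), math.sin(theta)
        for y in range(size):
            for x in range(size):
                # detector coordinate of pixel (x, y) for this view
                t = (x - c) * ct + (y - c) * st + c
                idx = min(max(int(round(t)), 0), size - 1)
                recon[y][x] += proj[idx]
    n = len(angles)
    return [[v / n for v in row] for row in recon]
```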
Combined generating-accelerating buncher for compact linear accelerators
NASA Astrophysics Data System (ADS)
Savin, E. A.; Matsievskiy, S. V.; Sobenin, N. P.; Sokolov, I. D.; Zavadtsev, A. A.
2016-09-01
The method of power extraction from a modulated electron beam described in a previous article [1] has been applied to a compact standing-wave electron linear accelerator feeding system, which does not require any connecting waveguides between the power source and the accelerator itself [2]. Generating and accelerating bunches meet in a hybrid accelerating cell operating in the TM020 mode; the accelerating module is placed on the axis of the generating module, which consists of pulsed high-voltage electron sources and electron dumps. This combination makes the accelerator very compact, which is valuable for modern applications such as portable inspection sources. Simulations and cold tests of the geometry are presented.
Rutherford, Helena J V; Goldberg, Benjamin; Luyten, Patrick; Bridgett, David J; Mayes, Linda C
2013-12-01
Parental reflective functioning represents the capacity of a parent to think about their own and their child's mental states and how these mental states may influence behavior. Here we examined whether this capacity as measured by the Parental Reflective Functioning Questionnaire relates to tolerance of infant distress by asking mothers (N = 21) to soothe a life-like baby simulator (BSIM) that was inconsolable, crying for a fixed time period unless the mother chose to stop the interaction. Increasing maternal interest and curiosity in their child's mental states, a key feature of parental reflective functioning, was associated with longer persistence times with the BSIM. Importantly, on a non-parent distress tolerance task, parental reflective functioning was not related to persistence times. These findings suggest that parental reflective functioning may be related to tolerance of infant distress, but not distress tolerance more generally, and thus may reflect specificity to persistence behaviors in parenting contexts.
NASA Astrophysics Data System (ADS)
Park, Jaehong; Ren, Chuang; Workman, Jared C.; Blackman, Eric G.
2013-03-01
Low Mach number, high beta fast mode shocks can occur in the magnetic reconnection outflows of solar flares. These shocks, which occur above flare loop tops, may provide the electron energization responsible for some of the observed hard X-rays and contemporaneous radio emission. Here we present new two-dimensional particle-in-cell simulations of low Mach number/high beta quasi-perpendicular shocks. The simulations show that electrons above a certain energy threshold experience shock-drift-acceleration. The transition energy between the thermal and non-thermal spectrum and the spectral index from the simulations are consistent with some of the X-ray spectra from RHESSI in the energy regime of E <~ 40 ~ 100 keV. Plasma instabilities associated with the shock structure such as the modified-two-stream and the electron whistler instabilities are identified using numerical solutions of the kinetic dispersion relations. We also show that the results from PIC simulations with reduced ion/electron mass ratio can be scaled to those with the realistic mass ratio.
Baillie, D; St Aubin, J; Fallone, B; Steciw, S
2014-06-15
Purpose: To design a new compact S-band linac waveguide capable of producing a 10 MV x-ray beam while maintaining the length (27.5 cm) of current 6 MV waveguides. This will allow higher x-ray energies to be used in our linac-MRI systems with the same footprint. Methods: The finite element software COMSOL Multiphysics was used to design an accelerator cavity matching one published in an experimental breakdown study, to ensure that our modeled cavities do not exceed the published threshold electric fields. This cavity was used as the basis for designing an accelerator waveguide, where each cavity of the full waveguide was tuned to resonate at 2.997 GHz by adjusting the cavity diameter. The RF field solution within the waveguide was calculated and, together with an electron-gun phase space generated using Opera3D/SCALA, was input into the electron tracking software PARMELA to compute the electron phase space striking the x-ray target. This target phase space was then used in BEAM Monte Carlo simulations to generate percent depth dose curves for the new linac, which were then used to re-optimize the waveguide geometry. Results: The shunt impedance, Q-factor, and peak-to-mean electric field ratio were matched to those published for the breakdown study to within 0.1% error. After tuning the full waveguide, the peak surface fields are calculated to be 207 MV/m (13% below the breakdown threshold), with a d-max depth of 2.42 cm and a D10/20 value of 1.59, compared to 2.45 cm and 1.59, respectively, for a simulated Varian 10 MV linac, and a bremsstrahlung production efficiency 20% lower than that of the simulated Varian 10 MV linac. Conclusion: This work demonstrates the design of a functional 27.5 cm waveguide producing 10 MV photons with characteristics similar to a Varian 10 MV linac.
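The tuning step, adjusting cavity diameter until each cell resonates at 2.997 GHz, reflects the inverse radius-frequency relation of a cylindrical cavity. For an ideal pillbox in the TM010 mode the relation is exact; real coupled S-band cells with nose cones deviate from it, so this is a back-of-envelope sketch, not the COMSOL model:

```python
import math

C = 299792458.0   # speed of light, m/s
J01 = 2.405       # first zero of the Bessel function J0

def pillbox_tm010_freq(radius_m):
    """TM010 resonant frequency of an ideal pillbox cavity,
    f = j01 * c / (2 * pi * a): frequency falls as the radius grows,
    which is why each cell is tuned by adjusting its diameter."""
    return J01 * C / (2.0 * math.pi * radius_m)

def radius_for_freq(freq_hz):
    """Invert the relation: the radius that resonates at freq_hz."""
    return J01 * C / (2.0 * math.pi * freq_hz)
```

For 2.997 GHz the ideal pillbox radius comes out near 3.8 cm, the right ballpark for an S-band cell.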
NASA Astrophysics Data System (ADS)
Rekaa, V. L.; Chapman, S. C.; Dendy, R. O.
2014-08-01
Supernova remnant and heliopause termination shock plasmas may contain significant populations of minority heavy ions, with relative number densities nα/ni up to 50%. Preliminary kinetic simulations of collisionless shocks in these environments showed that the reformation cycle and acceleration mechanisms at quasi-perpendicular shocks can depend on the value of nα/ni. Shock reformation unfolds on ion spatio-temporal scales, requiring fully kinetic simulations of particle dynamics, together with the self-consistent electric and magnetic fields. This paper presents the first set of particle-in-cell simulations for two ion species, protons (np) and α-particles (nα), with differing mass and charge-to-mass ratios, that spans the entire range of nα/ni from 0% to 100%. The interplay between the differing gyro length scales and timescales of the ion species is crucial to the time-evolving phenomenology of the shocks, the downstream turbulence, and the particle acceleration at different nα/ni. We show how the overall energization changes with nα/ni, and relate this to the processes individual ions undergo in the shock region and in the downstream turbulence, and to the power spectra of magnetic field fluctuations. The crossover between shocks dominated by the respective ion species happens when nα/ni = 25%, and minority ion energization is strongest in this regime. Energization of the majority ion species scales with injection energy. The power spectrum of the downstream turbulence includes peaks at sequential ion cyclotron harmonics, suggestive of ion ring-beam collective instability.
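The "differing gyro length scales and timescales" driving the two-species dynamics follow directly from the charge-to-mass ratios. A small sketch in proton units (our own helper, not from the paper): an α-particle has four times the proton mass and twice the charge, so its gyrofrequency is halved and, at equal perpendicular speed, its gyroradius doubles.

```python
def gyro_scales(mass_ratio, charge_ratio, B=1.0, v_perp=1.0):
    """Gyrofrequency and gyroradius relative to the proton (proton
    units): Omega = qB/m and r = m*v_perp/(qB). For alphas
    (mass_ratio=4, charge_ratio=2) Omega halves and r doubles,
    the scale separation behind the two-species shock phenomenology."""
    omega = charge_ratio * B / mass_ratio
    radius = mass_ratio * v_perp / (charge_ratio * B)
    return omega, radius
```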
Bai, Qifeng; Shi, Danfeng; Zhang, Yang; Liu, Huanxiang; Yao, Xiaojun
2014-07-01
Corticotropin-releasing factor receptor 1 (CRF1R), a member of class B G-protein-coupled receptors (GPCRs), plays an important role in the treatment of osteoporosis, diabetes, depression, migraine and anxiety. To explore the escape pathway of the antagonist CP-376395 in the binding pocket of CRF1R, molecular dynamics (MD) simulations, dynamical network analysis, random acceleration molecular dynamics (RAMD) simulations and adaptive biasing force (ABF) calculations were performed on the crystal structure of CRF1R in complex with CP-376395. The results of dynamical network analysis show that TM7 of CRF1R has the strongest edges during MD simulation. The bent part of TM7 forms a V-shape pocket with Gly356(7.50). Asn283(5.50) has high hydrogen bond occupancy during 100 ns MD simulations and is the key interaction residue with the antagonist in the binding pocket of CRF1R. RAMD simulation has identified three possible pathways (PW1, PW2 and PW3) for CP-376395 to escape from the binding pocket of CRF1R. The PW3 pathway was proved to be the most likely escape pathway for CP-376395. The free energy along the PW3 pathway was calculated by using ABF simulations. Two energy barriers were found along the reaction coordinates. Residues Leu323(6.49), Asn283(5.50) and Met206(3.47) contribute to the steric hindrance for the first energy barrier. Residues His199(3.40) and Gln355(7.49) contribute to the second energy barrier through the hydrogen bonding interaction between CP-376395 and CRF1R. The results of our study can not only provide useful information to understand the interaction mechanism between CP-376395 and CRF1R, but also provide the details about the possible escape pathway and the free energy profile of CP-376395 in the pocket of CRF1R.
Angular velocities, angular accelerations, and Coriolis accelerations
NASA Technical Reports Server (NTRS)
Graybiel, A.
1975-01-01
Weightlessness, rotating environment, and mathematical analysis of Coriolis acceleration is described for man's biological effective force environments. Effects on the vestibular system are summarized, including the end organs, functional neurology, and input-output relations. Ground-based studies in preparation for space missions are examined, including functional tests, provocative tests, adaptive capacity tests, simulation studies, and antimotion sickness.
John Womersley
2003-08-21
I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.
Fierro, Andrew; Dickens, James; Neuber, Andreas
2014-12-15
A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10 × 10^6 particles with dynamic weighting and a total mesh size larger than 10^8 cells. Verification of the simulation is performed by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel-plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted towards the anode. Comparison of the performance benefits between the GPU implementation and a CPU implementation is considered, and a speed-up factor of 13 for a 3D relaxation Poisson solver is obtained. Furthermore, a factor-60 speed-up is realized for parallelization of the electron processes.
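The Monte Carlo collision stage of a PIC/MCC code is commonly handled with the null-collision method, which tests every particle against a single constant maximum collision frequency so the per-particle work is uniform, a property that also suits GPU thread scheduling. A serial sketch of the idea; the collision-frequency function here is a hypothetical stand-in, not the nitrogen cross-section set the paper uses:

```python
import math
import random

def null_collision_step(speeds, nu_of_v, nu_max, dt, rng=random):
    """Null-collision Monte Carlo selection: each electron is first
    tested against the constant maximum collision frequency nu_max,
    then a candidate collision is kept with probability nu(v)/nu_max
    (the remainder are 'null' collisions that change nothing).
    Returns the indices of electrons that really collided."""
    p_trial = 1.0 - math.exp(-nu_max * dt)
    collided = []
    for idx, v in enumerate(speeds):
        if rng.random() < p_trial and rng.random() < nu_of_v(v) / nu_max:
            collided.append(idx)
    return collided
```

Because every particle runs the identical test, the loop body is branch-light and maps well onto one-thread-per-particle GPU kernels.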
NASA Technical Reports Server (NTRS)
Hung, R. J.; Pan, H. L.
1993-01-01
Some experimental spacecraft use superconducting sensors for gyro read-out and so must be maintained at a very low temperature. The boil-off from the cryogenic liquid used to cool the sensors can also be used, as the Gravity Probe B (GP-B) spacecraft does, as propellant to maintain attitude control and drag-free operation of the spacecraft. The cryogenic liquid for such spacecraft is, however, susceptible to both slosh-like motion and non-axisymmetric configurations under the influence of various kinds of gravity jitter and gravity gradient accelerations. Hence, it is important to quantify the magnitude of the liquid-induced perturbations on the spacecraft. We use the example of the GP-B to investigate such perturbations by numerical simulations. For this spacecraft, disturbances can be imposed on the liquid by atmospheric drag, spacecraft attitude control maneuvers, and the earth's gravity gradient. More generally, onboard machinery vibrations and crew motion can also create disturbances. Recent studies suggest that high frequency disturbances are relatively unimportant in causing liquid motions in comparison to low frequency ones. The results presented here confirm this conclusion. After an initial calibration period, the GP-B spacecraft rotates in orbit at 0.1 rpm about the tank symmetry axis. For this rotation rate, the equilibrium liquid free surface shape is a 'doughnut' configuration for all residual gravity levels of 10^-6 g_0 or less, as shown by experiments and by numerical simulations; furthermore, the superfluid behavior of the 1.8 K liquid helium used in GP-B eliminates temperature gradients and therefore such effects as Marangoni convection do not have to be considered. Classical fluid dynamics theory is used as the basis of the numerical simulations here, since Mason's experiments show that the theory is applicable for cryogenic liquid helium in large containers. To study liquid responses to various disturbances, we investigate and simulate
Wiebe, J; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open-source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro-MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm^2 to 98×98 mm^2 (maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
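The "majority of points within 3%" statement is a simple point-by-point relative comparison. A minimal sketch of that criterion (function name ours; clinical validation would typically add distance-to-agreement or a full gamma analysis, which this sketch omits):

```python
def fraction_within_tolerance(calculated, measured, tol=0.03):
    """Fraction of calculated dose points agreeing with measurement to
    within a relative tolerance - the plain point-by-point check behind
    'a majority of points within 3%'. Points with zero measured dose
    are counted as failures to avoid division by zero."""
    ok = sum(1 for c, m in zip(calculated, measured)
             if m != 0.0 and abs(c - m) / abs(m) <= tol)
    return ok / len(measured)
```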
Gokeri, Gurdal; Kocar, Cemil; Tombakoglu, Mehmet; Cecen, Yigit
2013-07-07
The usage of linear accelerator-generated x-rays for the stereotactic microbeam radiation therapy technique was evaluated in this study. Dose distributions were calculated with the Monte Carlo code MCNPX. Unidirectional single beams and beam arrays were simulated in a cylindrical water phantom to observe the effects of x-ray energies and irradiation geometry on dose distributions. Beam arrays were formed with square pencil beams. Two orthogonally interlaced beam arrays were simulated in a detailed head phantom and dose distributions were compared with ones which had been calculated for a bidirectional interlaced microbeam therapy (BIMRT) technique that uses synchrotron-generated x-rays. A parallel pattern of the beams was preserved through the phantom; however an unsegmented dose region could not be formed at the target. Five orthogonally interlaced beam array pairs (ten beam arrays) were simulated in a mathematical head phantom and the unsegmented dose region was formed. However, the dose fall-off distance is longer than the one that had been calculated for the BIMRT technique. Besides, the peak-to-dose ratios between the phantom's outer surface and the target region are lower. Therefore, the advantages of the MRT technique may not be preserved with the usage of a linac as the x-ray source.
Tamburini, M; Liseykina, T V; Pegoraro, F; Macchi, A
2012-01-01
Polarization and radiation reaction (RR) effects in the interaction of a superintense laser pulse (I > 10^23 W cm^-2) with a thin plasma foil are investigated with three-dimensional particle-in-cell (PIC) simulations. For a linearly polarized laser pulse, strong anisotropies, such as the formation of two high-energy clumps in the plane perpendicular to the propagation direction, and significant radiation reaction effects are observed. On the contrary, neither anisotropies nor significant radiation reaction effects are observed using circularly polarized laser pulses, for which the maximum ion energy exceeds the value obtained in simulations of lower dimensionality. The dynamical bending of the initially flat plasma foil leads to the self-formation of a quasiparabolic shell that focuses the impinging laser pulse, strongly increasing its energy and momentum densities.
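A standard way to see why such intensities sit deep in the radiation-reaction regime is the normalized laser amplitude a0. The widely used estimate for linear polarization is a0 ≈ 0.85 λ[μm] √(I / 10^18 W cm^-2); the specific wavelength below is an assumption for illustration, not a parameter from the paper:

```python
import math

def a0_linear(intensity_w_cm2, wavelength_um):
    """Normalized laser amplitude a0 for linear polarization,
    a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18 W/cm^2). At
    I > 1e23 W/cm^2 and optical wavelengths a0 reaches several
    hundred, where single-electron motion is ultrarelativistic and
    radiation reaction can no longer be neglected."""
    return 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)
```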
2013-07-11
"…for nitrogen using molecular dynamics simulation," 28th International Symposium on Rarefied Gas Dynamics, AIP Conf. Proc. 1501, 519-526 (2012); doi: … The DSMC method is accurate for highly nonequilibrium flows relevant to rarefied flows and sharp flow features with small length scales. Currently, both CFD and DSMC … While such simulations are not expected to overlap with the 3D near-continuum flows in the near future, they certainly overlap with rarefied DSMC
Wang, Lilie L. W.; Leszczynski, Konrad
2007-02-15
The focal spot size and shape of a medical linac are important parameters that determine the dose profiles, especially in the penumbral region. A relationship between the focal spot size and the dose profile penumbra has been studied and established from simulation results of the EGSnrc Monte Carlo code. A simple method is proposed to estimate the size and the shape of a linac's focal spot from the measured dose profile data.
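The focal-spot-to-penumbra relationship the abstract establishes can be illustrated with the simplest blurring model: an ideal field edge convolved with a Gaussian effective source gives an edge profile equal to the Gaussian CDF, so the 80-20% penumbra width is proportional to the spot width. This sketch ignores magnification, collimator transmission, and scatter, all of which the Monte Carlo study accounts for, and the function names are ours:

```python
Z80 = 0.8416  # standard normal quantile at probability 0.8

def penumbra_80_20(sigma_mm):
    """80-20% penumbra width of an ideal field edge blurred by a
    Gaussian effective focal spot of standard deviation sigma:
    width = (z_0.8 - z_0.2) * sigma ~ 1.683 * sigma."""
    return 2.0 * Z80 * sigma_mm

def focal_spot_sigma(penumbra_mm):
    """Invert the relation to estimate the effective spot sigma from a
    measured 80-20 penumbra, the direction of inference in the paper."""
    return penumbra_mm / (2.0 * Z80)
```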
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
NASA Astrophysics Data System (ADS)
Ryne, Robert D.
2006-09-01
Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, "Facilities for the Future of Science: A Twenty-Year Outlook." Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represents a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.
Extraordinary Tools for Extraordinary Science: The Impact ofSciDAC on Accelerator Science&Technology
Ryne, Robert D.
2006-08-10
Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, "Facilities for the Future of Science: A Twenty-Year Outlook." Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represents a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.
NASA Astrophysics Data System (ADS)
Moore, C. I.; Hafizi, B.; Ting, A.; Burris, H. R.; Sprangle, P.; Esarey, E.; Ganguly, A.; Hirshfield, J. L.
1997-11-01
The Vacuum Beat Wave Accelerator (VBWA) is a particle acceleration scheme that uses the nonlinear ponderomotive beating of two different-frequency laser beams to accelerate electrons. A proof-of-principle experiment to demonstrate the VBWA is underway at the Naval Research Laboratory (NRL). This experiment will use the beating of a 1054 nm and a 527 nm laser pulse from the NRL T-cubed laser to generate the beat wave, and a 4.5 MeV RF electron gun as the electron source. Simulation results and the experimental design will be presented. The suitability of using axicon or higher-order Gaussian laser beams will also be discussed.
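The beat wave itself follows from elementary superposition: two co-propagating fields at frequencies w1 and w2 produce an intensity envelope modulated at the difference frequency w2 - w1. A small numerical sketch using the quoted wavelengths (our own illustration, not the experiment's model):

```python
import numpy as np

# Illustration only: beat of the two quoted laser wavelengths.
c = 2.998e8                      # speed of light, m/s
lam1, lam2 = 1054e-9, 527e-9     # fundamental and second harmonic, m
w1 = 2 * np.pi * c / lam1
w2 = 2 * np.pi * c / lam2

t = np.linspace(0.0, 30e-15, 20000)        # 30 fs window
field = np.cos(w1 * t) + np.cos(w2 * t)    # superposed unit-amplitude fields

# For a 2:1 harmonic pair the difference frequency w2 - w1 equals w1
# itself, so the beat structure repeats on the 1054 nm wavelength scale.
beat_freq = (w2 - w1) / (2 * np.pi)
print(f"beat frequency ~ {beat_freq:.3e} Hz")
```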
Barletta, William A. (MIT)
2008-09-01
Only a handful of universities in the US offer any formal training in accelerator science. The United States Particle Accelerator School (USPAS) is a National Graduate Educational Program that has developed a highly successful educational paradigm and, over the past twenty years, has granted more university credit in accelerator/beam science and technology than any university in the world. Sessions are held twice annually, hosted by major US research universities that approve the courses, certify the USPAS faculty, and grant course credit. The USPAS paradigm is readily extensible to other rapidly developing, cross-disciplinary research areas such as high energy density physics.
Barletta, William A.
2009-03-10
Only a handful of universities in the US offer any formal training in accelerator science. The United States Particle Accelerator School (USPAS) is a National Graduate Educational Program that has developed a highly successful educational paradigm and, over the past twenty years, has granted more university credit in accelerator/beam science and technology than any university in the world. Sessions are held twice annually, hosted by major US research universities that approve the courses, certify the USPAS faculty, and grant course credit. The USPAS paradigm is readily extensible to other rapidly developing, cross-disciplinary research areas such as high energy density physics.
Hemm, Simone; Pison, Daniela; Alonso, Fabiola; Shah, Ashesh; Coste, Jérôme; Lemaire, Jean-Jacques; Wårdell, Karin
2016-01-01
Despite an increasing use of deep brain stimulation (DBS), the fundamental mechanisms of action remain largely unknown. Simulation of electric entities has previously been proposed for chronic DBS combined with subjective symptom evaluations, but not for intraoperative stimulation tests. The present paper introduces a method for an objective exploitation of intraoperative stimulation test data to identify the optimal implant position of the chronic DBS lead by relating the electric field (EF) simulations to the patient-specific anatomy and the clinical effects quantified by accelerometry. To illustrate the feasibility of this approach, it was applied to five patients with essential tremor bilaterally implanted in the ventral intermediate nucleus (VIM). The VIM and its neighborhood structures were preoperatively outlined in 3D on white matter attenuated inversion recovery MR images. Quantitative intraoperative clinical assessments were performed using accelerometry. EF simulations (n = 272) for intraoperative stimulation test data performed along two trajectories per side were set up using the finite element method for 143 stimulation test positions. The resulting EF isosurface of 0.2 V/mm was superimposed on the outlined anatomical structures. The percentage of each structure's volume covered by the isosurface was calculated and related to the corresponding clinical improvement. The proposed concept has been successfully applied to the five patients. For higher clinical improvements, not only the VIM but also other neighboring structures were covered by the EF isosurfaces. The percentage of the volumes of the VIM, of the nucleus intermediate lateral of the thalamus and the prelemniscal radiations within the prerubral field of Forel increased for clinical improvements higher than 50% compared to improvements lower than 50%. The presented new concept allows a detailed and objective analysis of a large amount of intraoperative data to identify the optimal stimulation target. First
NASA Astrophysics Data System (ADS)
Moore, Alexander
This thesis begins with a description of a hybrid symplectic integrator named QYMSYM, which is capable of planetary system simulations. This integrator has been programmed with the Compute Unified Device Architecture (CUDA) language which allows for implementation on Graphics Processing Units (GPUs). With the enhanced compute performance made available by this choice, QYMSYM was used to study the effects debris disks have on the dynamics of the extrasolar planetary systems HR 8799 and KOI-730. The four planet system HR 8799 was chosen because it was known to have relatively small regions of stability in orbital phase space. Using this fact, it can be shown that a simulated debris disk of moderate mass around HR 8799 can easily pull this system out of these regions of stability. In other cases it is possible to migrate the system to a region of stability - although this requires significantly more mass and a degree of fine tuning. These findings suggest that previous studies on the stability of HR 8799 which do not include a debris disk may not accurately report on the size and location of the stable orbital phase space available for the planets. This insight also calls into question the practice of using dynamical simulations to help constrain observed planetary orbital data. Next, by studying the stability of another four planet system, KOI-730, whose planets are in an 8:6:4:3 mean motion resonance, we were additionally able to determine mass constraints on debris disks for KOI-730 like Kepler objects. Noting that planet inclinations increase by a couple of degrees when migrating through a Neptune mass debris disk, and that planet candidates discovered by the Kepler Space Telescope are along the line of sight, it is concluded that significant planetary migration did not occur among the Kepler objects. This result indicates that Kepler objects like KOI-730 have relatively small or stable debris disks which did not cause migration of their planets - ruling out late
NASA Astrophysics Data System (ADS)
Stuchebrov, S. G.; Miloichikova, I. A.; Krasnykh, A. A.
2016-07-01
In this paper, numerical simulation results for the spatial dose distribution of medical electron beams in ABS plastic doped with different concentrations of lead and zinc are presented. The dependence of the test material density on the lead and zinc mass concentrations is illustrated. Depth dose distributions of the medical electron beams in the modified ABS plastic are presented for three energies: 6 MeV, 12 MeV, and 20 MeV. The electron beam shapes in the transverse plane in ABS plastic doped with different concentrations of lead and zinc are also presented.
NASA Astrophysics Data System (ADS)
Vermeire, B. C.; Witherden, F. D.; Vincent, P. E.
2017-04-01
First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost; for example, going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reductions, respectively, for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.
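The accuracy-per-cost argument rests on the standard scaling error ≈ C·h^p for a pth-order scheme on mesh spacing h. A back-of-envelope sketch (our own arithmetic with an assumed C = 1, not figures from the study) shows why matching third-order accuracy with a second-order scheme is expensive in 3D:

```python
# Sketch of the accuracy-vs-order trade-off: error ~ C * h**p for a
# p-th order scheme on mesh spacing h (C = 1 assumed for illustration).
def error(h, p, C=1.0):
    return C * h ** p

h = 0.1
e2, e3 = error(h, 2), error(h, 3)
print(f"2nd order: {e2:.4f}, 3rd order: {e3:.4f}, ratio {e2 / e3:.1f}x")

# To match the 3rd-order error with a 2nd-order scheme, solve h'**2 = h**3,
# i.e. h' = h**1.5, which in 3D multiplies the cell count substantially.
h_needed = e3 ** 0.5
cells_factor = (h / h_needed) ** 3
print(f"refinement per axis: {h / h_needed:.2f}, cell-count factor: {cells_factor:.1f}")
```

The constant C and the per-cell cost differ between schemes in practice, which is why the paper measures actual accuracy against actual runtime rather than relying on this asymptotic argument alone.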
Simulations and Measurements for a concept of powering CALICE-AHCAL at a train-cycled accelerator
NASA Astrophysics Data System (ADS)
Göttlicher, P.
2013-01-01
Improving calorimetry via the particle-flow algorithm requires recording the details of the shower development. This calls for a highly granular analogue-readout hadron calorimeter (AHCAL) with small sensors and with electronics able to handle the enormous number of channels, ≈40,000/m³. Homogeneity is maintained by avoiding cooling tubes in the active volume and cooling only at the service end. For this concept, a low power consumption per channel, 40 μW, is essential. Future linear e+e- collider designs, ILC or CLIC, foresee duty cycles for the bunch delivery. At the ILC, bunch trains of 1 ms duration are followed by long breaks of 200 ms. Power cycling the front-end electronics with the train structure can reduce power consumption by a factor of 100. However, for a full-scale CALICE-AHCAL, the switched currents reach magnitudes of kiloamperes. This paper describes the design chain from the front-end PCBs through to the external power supplies. Simulations were used to develop a concept in which the effects of electromagnetic interference are kept small and localized. The goal is to keep current loops small, to confine the switched current to the region near the switched consumer, and to allow only low-frequency currents to spread further into the system. In this way, analogue performance can be kept high, and parasitic couplings to the surrounding metal structures and other sub-detectors are minimized. Measurements with existing prototypes support the validity of the simulations.
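The quoted factor of 100 follows from duty-cycle arithmetic; a sketch with the ILC numbers given above (the remark about switching overheads is our own gloss):

```python
# Duty-cycle arithmetic behind the quoted power-cycling saving.
train = 1e-3    # ILC bunch-train length, s
gap = 200e-3    # inter-train break, s

duty = train / (train + gap)
ideal_saving = 1.0 / duty
print(f"duty cycle {duty:.2%}, ideal power saving ~{ideal_saving:.0f}x")
# Wake-up and settling overheads eat into the ideal ~200x,
# leaving roughly the factor of 100 quoted in the text.
```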
Hemm, Simone; Pison, Daniela; Alonso, Fabiola; Shah, Ashesh; Coste, Jérôme; Lemaire, Jean-Jacques; Wårdell, Karin
2016-01-01
Despite an increasing use of deep brain stimulation (DBS), the fundamental mechanisms of action remain largely unknown. Simulation of electric entities has previously been proposed for chronic DBS combined with subjective symptom evaluations, but not for intraoperative stimulation tests. The present paper introduces a method for an objective exploitation of intraoperative stimulation test data to identify the optimal implant position of the chronic DBS lead by relating the electric field (EF) simulations to the patient-specific anatomy and the clinical effects quantified by accelerometry. To illustrate the feasibility of this approach, it was applied to five patients with essential tremor bilaterally implanted in the ventral intermediate nucleus (VIM). The VIM and its neighborhood structures were preoperatively outlined in 3D on white matter attenuated inversion recovery MR images. Quantitative intraoperative clinical assessments were performed using accelerometry. EF simulations (n = 272) for intraoperative stimulation test data performed along two trajectories per side were set up using the finite element method for 143 stimulation test positions. The resulting EF isosurface of 0.2 V/mm was superimposed on the outlined anatomical structures. The percentage of each structure's volume covered by the isosurface was calculated and related to the corresponding clinical improvement. The proposed concept has been successfully applied to the five patients. For higher clinical improvements, not only the VIM but also other neighboring structures were covered by the EF isosurfaces. The percentage of the volumes of the VIM, of the nucleus intermediate lateral of the thalamus and the prelemniscal radiations within the prerubral field of Forel increased for clinical improvements higher than 50% compared to improvements lower than 50%. The presented new concept allows a detailed and objective analysis of a large amount of intraoperative data to identify the optimal stimulation target
Wang, Junhua; Sun, Shuaiyi; Fang, Shouen; Fu, Ting; Stipancic, Joshua
2017-02-01
This paper aims both to identify the factors affecting driver drowsiness and to develop a real-time drowsy driving probability model based on virtual Location-Based Services (LBS) data obtained using a driving simulator. A driving simulation experiment was designed and conducted using 32 participant drivers. Collected data included the continuous driving time before detection of drowsiness and virtual LBS data related to temperature, time of day, lane width, average travel speed, driving time in heavy traffic, and driving time on different roadway types. Demographic information, such as nap habit, age, gender, and driving experience, was also collected through questionnaires distributed to the participants. An Accelerated Failure Time (AFT) model was developed to estimate the driving time before detection of drowsiness. The results of the AFT model showed driving time before drowsiness was longer during the day than at night, and was longer at lower temperatures. Additionally, drivers who identified as having a nap habit were more vulnerable to drowsiness. Generally, higher average travel speeds were correlated to a higher risk of drowsy driving, as were longer periods of low-speed driving in traffic jam conditions. Considering different road types, drivers felt drowsy more quickly on freeways compared to other facilities. The proposed model provides a better understanding of how driver drowsiness is influenced by different environmental and demographic factors. The model can be used to provide real-time data for the LBS-based drowsy driving warning system, improving past methods based only on a fixed driving.
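An AFT model assumes covariates rescale survival time multiplicatively, log T = b0 + b·x + sigma·eps. A toy simulation in that spirit (all coefficients are hypothetical, chosen only to mirror the reported signs: night driving and a nap habit shorten time-to-drowsiness):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AFT model: log T = b0 + b_night*night + b_nap*nap + sigma*eps.
# Coefficients are invented for illustration, not the paper's estimates.
b0 = np.log(60.0)   # baseline time-to-drowsiness, minutes (hypothetical)
b_night = -0.3      # driving at night shortens time (sign from the paper)
b_nap = -0.2        # nap-habit drivers drowse sooner (sign from the paper)
sigma = 0.25

def simulate_time(night, nap, n=10000):
    """Draw n simulated times-to-drowsiness for a covariate profile."""
    eps = rng.standard_normal(n)
    return np.exp(b0 + b_night * night + b_nap * nap + sigma * eps)

day_no_nap = simulate_time(0, 0).mean()
night_nap = simulate_time(1, 1).mean()
print(f"mean time: day/no-nap {day_no_nap:.1f} min, night/nap {night_nap:.1f} min")
```

Fitting such a model to real data means maximizing the corresponding log-likelihood with censoring, which the paper does with the AFT framework.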
Accelerator Technology Division annual report, FY 1989
Not Available
1990-06-01
This paper discusses: accelerator physics and special projects; experiments and injectors; magnetic optics and beam diagnostics; accelerator design and engineering; radio-frequency technology; accelerator theory and simulation; free-electron laser technology; accelerator controls and automation; and high power microwave sources and effects.
Capacity is the Wrong Paradigm
2002-01-01
At present, "capacity" is the prevailing paradigm for covert channels. With respect to steganography ... Steganography is the art and science of sending a hidden message from Alice to Bob, so that an eavesdropper is not aware that this hidden ... discussed a different new paradigm concerning steganography. The concern of that new paradigm was "when is something discovered." We feel that both ...
The WIMP Paradigm: Current Status
Feng, Jonathan
2011-03-23
The WIMP paradigm is the glue that joins together much of the high energy and cosmic frontiers. It postulates that most of the matter in the Universe is made of weakly-interacting massive particles, with implications for a broad range of experiments and observations. I will review the WIMP paradigm's underlying motivations, its current status in view of rapid experimental progress on several fronts, and recent theoretical variations on the WIMP paradigm theme.
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...
2017-02-16
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
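The link between wavelet-coefficient decay and the Lipschitz (Holder) exponent can be sketched numerically: for f(x) = |x|^alpha, the wavelet-transform modulus near the singularity decays like s^alpha across scales s, so the log-log slope recovers alpha. The construction below is our own illustration of that principle, not the paper's algorithm:

```python
import numpy as np

# Estimate a local Lipschitz exponent from cross-scale wavelet decay.
N = 4096
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
alpha_true = 0.5
f = np.abs(x) ** alpha_true          # signal with a Holder-0.5 singularity at 0

def max_modulus(s):
    """Max wavelet-transform modulus near the singularity at scale s."""
    t = np.arange(-6 * s, 6 * s, dx)
    psi = -(t / s**2) * np.exp(-t**2 / (2 * s**2))   # derivative of a Gaussian
    w = np.convolve(f, psi, mode="same") * dx
    return np.abs(w[np.abs(x) < 0.3]).max()          # stay clear of array edges

scales = np.array([0.01, 0.02, 0.04, 0.08])
mods = np.array([max_modulus(s) for s in scales])
slope = np.polyfit(np.log(scales), np.log(mods), 1)[0]
print(f"estimated exponent ~ {slope:.2f} (true {alpha_true})")
```

A smooth signal gives a larger slope, so a sudden drop in the estimated exponent flags the onset of a sharp transition, which is the detection idea exploited in the paper.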
Toward new Drosophila paradigms.
Andrioli, Luiz Paulo
2012-08-01
The fruit fly Drosophila melanogaster is a great model system in developmental biology studies and related disciplines. In a historical perspective, I focus on the formation of the Drosophila segmental body plan using a comparative approach. I highlight the evolutionary trend of increasing complexity of the molecular segmentation network in arthropods that resulted in an incredible degree of complexity at the gap gene level in derived Diptera. There is growing evidence that Drosophila is a highly derived insect, and we are still far from fully understanding the underlying evolutionary mechanisms that led to its complexity. In addition, recent data have altered how we view the transcriptional regulatory mechanisms that control segmentation in Drosophila. However, these observations are not all bad news for the field. Instead, they stimulate further study of segmentation in Drosophila and in other species as well. To me, these seemingly new Drosophila paradigms are very challenging ones.
Cohen, J Craig; Larson, Janet E
2008-01-08
Genetic and environmental agents that disrupt organogenesis are numerous and well described. Less well established, however, is the role of delay in the developmental processes that yield functionally immature tissues at birth. Evidence is mounting that organs do not continue to develop postnatally in the context of these organogenesis insults, condemning the patient to utilize under-developed tissues for adult processes. These poorly differentiated organs may appear histologically normal at birth but with age may deteriorate, revealing progressive or adult-onset pathology. The genetic and molecular underpinning of the proposed paradigm reveals the need for a comprehensive systems biology approach to evaluate the role of the maternal-fetal environment on organogenesis. "You may delay, but time will not." (Benjamin Franklin, USA Founding Father)
Paradigms for parasite conservation.
Dougherty, Eric R; Carlson, Colin J; Bueno, Veronica M; Burgio, Kevin R; Cizauskas, Carrie A; Clements, Christopher F; Seidel, Dana P; Harris, Nyeema C
2016-08-01
Parasitic species, which depend directly on host species for their survival, represent a major regulatory force in ecosystems and a significant component of Earth's biodiversity. Yet the negative impacts of parasites observed at the host level have motivated a conservation paradigm of eradication, moving us farther from attainment of taxonomically unbiased conservation goals. Despite a growing body of literature highlighting the importance of parasite-inclusive conservation, most parasite species remain understudied, underfunded, and underappreciated. We argue the protection of parasitic biodiversity requires a paradigm shift in the perception and valuation of their role as consumer species, similar to that of apex predators in the mid-20th century. Beyond recognizing parasites as vital trophic regulators, existing tools available to conservation practitioners should explicitly account for the unique threats facing dependent species. We built upon concepts from epidemiology and economics (e.g., host-density threshold and cost-benefit analysis) to devise novel metrics of margin of error and minimum investment for parasite conservation. We define margin of error as the risk of accidental host extinction from misestimating equilibrium population sizes and predicted oscillations, while minimum investment represents the cost associated with conserving the additional hosts required to maintain viable parasite populations. This framework will aid in the identification of readily conserved parasites that present minimal health risks. To establish parasite conservation, we propose an extension of population viability analysis for host-parasite assemblages to assess extinction risk. In the direst cases, ex situ breeding programs for parasites should be evaluated to maximize success without undermining host protection. Though parasitic species pose a considerable conservation challenge, adaptations to conservation tools will help protect parasite biodiversity in the face of
Dielectric assist accelerating structure
NASA Astrophysics Data System (ADS)
Satoh, D.; Yoshida, M.; Hayashizaki, N.
2016-01-01
A higher-order TM02n mode accelerating structure is proposed based on a novel concept of dielectric loaded rf cavities. This accelerating structure consists of ultralow-loss dielectric cylinders and disks with irises which are periodically arranged in a metallic enclosure. Unlike conventional dielectric loaded accelerating structures, most of the rf power is stored in the vacuum space near the beam axis, leading to a significant reduction of the wall loss, much lower than that of conventional normal-conducting linac structures. This allows us to realize an extremely high quality factor and a very high shunt impedance at room temperature. A simulation of a 5-cell prototype design with an existing alumina ceramic indicates an unloaded quality factor of the accelerating mode over 120,000 and a shunt impedance exceeding 650 MΩ/m at room temperature.
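Shunt impedance per unit length, r = E^2 / (P/L) for accelerating gradient E and wall loss per metre P/L, turns the quoted 650 MΩ/m directly into a power estimate. A hedged arithmetic sketch (the 10 MV/m gradient is our assumption, not a figure from the paper):

```python
# Back-of-envelope wall-loss estimate from the quoted shunt impedance.
r = 650e6          # shunt impedance per unit length, ohm/m (quoted)
gradient = 10e6    # assumed accelerating gradient, V/m (hypothetical)

power_per_m = gradient ** 2 / r   # wall loss per metre, W/m
print(f"wall loss ~ {power_per_m / 1e3:.1f} kW/m at {gradient / 1e6:.0f} MV/m")
```

The point of the high r is visible here: a conventional room-temperature structure with a several-fold lower shunt impedance would dissipate correspondingly more power at the same gradient.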
Naturalistic Inquiry: Paradigm and Method.
ERIC Educational Resources Information Center
Lotto, Linda S.
Despite the rhetoric acclaiming it as a new paradigm, educational researchers have tended to treat naturalistic inquiry as a new or alternative method employed within the dominant, rationalistic paradigm. Spokespersons for naturalistic inquiry tend to concentrate on what one does differently rather than how one perceives what one is doing…
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result is a beam that spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperates to provide a stable, focused particle beam.
NASA Astrophysics Data System (ADS)
Lee, Kyoung-Rok; Koo, Weoncheol; Kim, Moo-Hyun
2013-12-01
A floating Oscillating Water Column (OWC) wave energy converter, a Backward Bent Duct Buoy (BBDB), was simulated using a state-of-the-art, two-dimensional, fully nonlinear Numerical Wave Tank (NWT) technique. The hydrodynamic performance of the floating OWC device was evaluated in the time domain. The acceleration potential method, with a fully updated kernel-matrix calculation associated with a mode decomposition scheme, was implemented to obtain accurate estimates of the hydrodynamic force and displacement of a freely floating BBDB. The developed NWT was based on potential theory and the boundary element method with constant panels on the boundaries. The mixed Eulerian-Lagrangian (MEL) approach was employed to capture the nonlinear free surfaces inside the chamber that interact with the pneumatic pressure induced by the time-varying airflow velocity at the air duct. A special viscous damping was applied to the chamber free surface to represent the viscous energy loss due to the BBDB's shape and motions. The viscous damping coefficient was selected by comparison with experimental data. The calculated surface elevation, inside and outside the chamber, with the tuned viscous damping, correlated reasonably well with the experimental data for various incident wave conditions. The conservation of the total wave energy in the computational domain was confirmed over the entire range of wave frequencies.