Sample records for n-particle extended code

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamek, Julian; Daverio, David; Durrer, Ruth

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and to go beyond the usually adopted quasi-static approximation. Our code is publicly available.
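
    The practical advantage of evolving a canonical momentum rather than a peculiar velocity is that the update stays well defined as particles become relativistic, since the velocity derived from the momentum is automatically bounded by the speed of light. The sketch below is a generic, purely illustrative momentum-based kick-drift step in Python (special-relativistic, with a placeholder force function); it is not gevolution's actual weak-field geodesic equation, which contains additional metric terms.

```python
import numpy as np

def kick_drift(x, p, force, m=1.0, c=1.0, dt=1e-3):
    """One momentum-based kick-drift step (illustrative only).

    Evolving the momentum p and deriving the velocity as
    v = p c / sqrt(|p|^2 + m^2 c^2) keeps |v| < c automatically, which is
    why a momentum formulation stays valid for relativistic particles.
    gevolution's actual geodesic equation in Poisson gauge contains metric
    terms that are not modelled here.
    """
    p = p + force(x) * dt                                   # kick: dp/dt = F
    v = p * c / np.sqrt((p**2).sum(-1, keepdims=True) + (m * c)**2)
    x = x + v * dt                                          # drift: dx/dt = v(p)
    return x, p

# Toy usage: one particle accelerated by a constant force; |v| saturates below c.
x, p = np.zeros((1, 3)), np.zeros((1, 3))
for _ in range(10000):
    x, p = kick_drift(x, p, force=lambda _: np.array([[0.0, 0.0, 1.0]]))
```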

  2. GASOLINE: Smoothed Particle Hydrodynamics (SPH) code

    NASA Astrophysics Data System (ADS)

    N-Body Shop

    2017-10-01

    Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.
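
    For context, the O(N log N) cost of tree-based gravity comes from accepting a distant node's aggregate (monopole) contribution whenever it subtends a small enough opening angle. The following Python sketch illustrates that idea with a simple median-split tree; it is only a schematic stand-in, not Gasoline's KD-Tree Poisson solver.

```python
import numpy as np

class Node:
    """Median-split tree node storing total mass, centre of mass and size."""
    def __init__(self, pos, mass, depth=0, leaf_size=8):
        self.pos, self.mass = pos, mass
        self.m = mass.sum()
        self.com = (pos * mass[:, None]).sum(0) / self.m
        self.size = (pos.max(0) - pos.min(0)).max()
        self.children = []
        if len(mass) > leaf_size:
            axis = depth % 3
            left = pos[:, axis] <= np.median(pos[:, axis])
            for sel in (left, ~left):
                if sel.any() and not sel.all():
                    self.children.append(Node(pos[sel], mass[sel], depth + 1, leaf_size))

def accel(node, x, theta=0.5, eps=1e-2):
    """Softened acceleration at x (G = 1), opening nodes with size/r >= theta."""
    if not node.children:                       # leaf: direct (softened) sum
        d = node.pos - x
        r = np.sqrt((d**2).sum(1)) + eps
        return (node.mass[:, None] * d / r[:, None]**3).sum(0)
    d = node.com - x
    r = np.sqrt(d @ d) + eps
    if node.size / r < theta:                   # far node: monopole approximation
        return node.m * d / r**3
    return sum(accel(c, x, theta, eps) for c in node.children)

# Toy usage: acceleration on the first of 1000 unit-mass particles.
rng = np.random.default_rng(0)
pos, mass = rng.random((1000, 3)), np.ones(1000)
a = accel(Node(pos, mass), pos[0])
```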

  3. Cosmology in one dimension: Vlasov dynamics.

    PubMed

    Manfredi, Giovanni; Rouet, Jean-Louis; Miller, Bruce; Shiozawa, Yui

    2016-04-01

    Numerical simulations of self-gravitating systems are generally based on N-body codes, which solve the equations of motion of a large number of interacting particles. This approach suffers from poor statistical sampling in regions of low density. In contrast, Vlasov codes, by meshing the entire phase space, can reach higher accuracy irrespective of the density. Here, we perform one-dimensional Vlasov simulations of a long-standing cosmological problem, namely, the fractal properties of an expanding Einstein-de Sitter universe in Newtonian gravity. The N-body results are confirmed for high-density regions and extended to regions of low matter density, where the N-body approach usually fails.
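
    For reference, the system that a 1D Vlasov code discretizes on a phase-space mesh can be written as follows (here in its simplest non-expanding form; the cosmological case adds scale-factor terms not shown):

```latex
\frac{\partial f}{\partial t} + v\,\frac{\partial f}{\partial x}
  - \frac{\partial \phi}{\partial x}\,\frac{\partial f}{\partial v} = 0,
\qquad
\frac{\partial^{2}\phi}{\partial x^{2}} = 4\pi G\,(\rho - \bar\rho),
\qquad
\rho(x,t) = \int f(x,v,t)\,\mathrm{d}v .
```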

  4. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-02-15

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.
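
    As background, and as an assumption on my part about the standard quasi-static formulation used in such simulations rather than a quotation from this paper, the extra scalar degree of freedom f_R ≡ df/dR is typically obtained by relaxing a nonlinear field equation of roughly the form

```latex
\nabla^{2}\delta f_{R} \simeq \frac{a^{2}}{3}\Big[\delta R(f_{R}) - 8\pi G\,\delta\rho_{m}\Big],
\qquad
\nabla^{2}\Phi \simeq \frac{16\pi G}{3}\,a^{2}\,\delta\rho_{m} - \frac{a^{2}}{6}\,\delta R(f_{R}),
```

    where δR depends nonlinearly on f_R; this nonlinearity is what produces the chameleon screening and why the equation must be solved self-consistently on the mesh.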

  5. Verification of long wavelength electromagnetic modes with a gyrokinetic-fluid hybrid model in the XGC code

    PubMed Central

    Lang, Jianying; Ku, S.; Chen, Y.; Parker, S. E.; Adams, M. F.

    2017-01-01

    As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons analogous to Chen and Parker [Phys. Plasmas 8, 441 (2001)]. Two representative long wavelength modes, shear Alfvén waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries. PMID:29104419

  6. The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.

    2014-02-01

    A proton therapy test facility with a beam current lower than 10 nA on average, and an energy up to 150 MeV, is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected into an SCDTL (Side Coupled Drift Tube Linac) structure reaching the energy of 52 MeV. Then a conventional CCL (Coupled Cavity Linac) with side coupling cavities completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequent low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is therefore almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented into radiation transport computer codes based on the Monte Carlo method. The aim is to assess the radiation field around the main source in support of the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general purpose tools for calculating particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each utilizes its own nuclear cross section libraries and specific physics models for particle types and energies. The models implemented into the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the advantages and disadvantages of each code in the specific application.

  7. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    DTIC Science & Technology

    2014-03-27

    VERIFICATION AND VALIDATION OF MONTE CARLO N-PARTICLE CODE 6 (MCNP6) WITH NEUTRON PROTECTION FACTOR MEASUREMENTS OF AN IRON BOX. THESIS. Presented to the Faculty, Department of Engineering... STATEMENT A. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED. AFIT-ENP-14-M-05

  8. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    NASA Astrophysics Data System (ADS)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008), 10.1103/PhysRevD.78.123524] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009), 10.1103/PhysRevD.79.083518], and extend the resolution up to k ~ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  9. Status of the R-matrix Code AMUR toward a consistent cross-section evaluation and covariance analysis for the light nuclei

    NASA Astrophysics Data System (ADS)

    Kunieda, Satoshi

    2017-09-01

    We report the status of the R-matrix code AMUR toward consistent cross-section evaluation and covariance analysis for the light-mass nuclei. The applicable limit of the code is extended by including computational capability for charged-particle elastic scattering cross-sections and neutron capture cross-sections; example results are shown in the main text. A simultaneous analysis is performed on the 17O compound system including the 16O(n,tot) and 13C(α,n)16O reactions together with the 16O(n,n) and 13C(α,α) scattering cross-sections. It is found that a large theoretical background is required for each reaction process to obtain a simultaneous fit to all the experimental cross-sections we analyzed. Also, the hard-sphere radii should be assumed to be different from the channel radii. Although these are technical approaches, we could learn about the roles and sources of the theoretical background in the standard R-matrix.

  10. Comparison of fluence-to-dose conversion coefficients for deuterons, tritons and helions.

    PubMed

    Copeland, Kyle; Friedberg, Wallace; Sato, Tatsuhiko; Niita, Koji

    2012-02-01

    Secondary radiation in aircraft and spacecraft includes deuterons, tritons and helions. Two sets of fluence-to-effective dose conversion coefficients for isotropic exposure to these particles were compared: one used the particle and heavy ion transport code system (PHITS) radiation transport code coupled with the International Commission on Radiological Protection (ICRP) reference phantoms (PHITS-ICRP) and the other the Monte Carlo N-Particle eXtended (MCNPX) radiation transport code coupled with modified BodyBuilder™ phantoms (MCNPX-BB). Also, two sets of fluence-to-effective dose equivalent conversion coefficients calculated using the PHITS-ICRP combination were compared: one used quality factors based on linear energy transfer; the other used quality factors based on lineal energy (y). Finally, PHITS-ICRP effective dose coefficients were compared with PHITS-ICRP effective dose equivalent coefficients. The PHITS-ICRP and MCNPX-BB effective dose coefficients were similar, except at high energies, where MCNPX-BB coefficients were higher. For helions, at most energies effective dose coefficients were much greater than effective dose equivalent coefficients. For deuterons and tritons, coefficients were similar when their radiation weighting factor was set to 2.

  11. Modelling of aircrew radiation exposure from galactic cosmic rays and solar particle events.

    PubMed

    Takada, M; Lewis, B J; Boudreau, M; Al Anid, H; Bennett, L G I

    2007-01-01

    Correlations have been developed for implementation into the semi-empirical Predictive Code for Aircrew Radiation Exposure (PCAIRE) to account for effects of extremum conditions of solar modulation and low altitude based on transport code calculations. An improved solar modulation model, as proposed by NASA, has been further adopted to interpolate between the bounding correlations for solar modulation. The conversion ratio of effective dose to ambient dose equivalent, as applied to the PCAIRE calculation (based on measurements) for the legal regulation of aircrew exposure, was re-evaluated in this work to take into consideration new ICRP-92 radiation-weighting factors and different possible irradiation geometries of the source cosmic-radiation field. A computational analysis with Monte Carlo N-Particle eXtended Code was further used to estimate additional aircrew exposure that may result from sporadic solar energetic particle events considering real-time monitoring by the Geosynchronous Operational Environmental Satellite. These predictions were compared with the ambient dose equivalent rates measured on-board an aircraft and to count rate data observed at various ground-level neutron monitors.

  12. Production of energetic light fragments in extensions of the CEM and LAQGSM event generators of the Monte Carlo transport code MCNP6 [Production of energetic light fragments in CEM, LAQGSM, and MCNP6]

    DOE PAGES

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie; Gudima, Konstantin K.; ...

    2017-03-23

    We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for the possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7 in the case of CEM and A ≤ 12 in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Lastly, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.

  13. Production of energetic light fragments in extensions of the CEM and LAQGSM event generators of the Monte Carlo transport code MCNP6 [Production of energetic light fragments in CEM, LAQGSM, and MCNP6]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie; Gudima, Konstantin K.

    We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for the possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7 in the case of CEM and A ≤ 12 in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Lastly, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.

  14. Collisionless stellar hydrodynamics as an efficient alternative to N-body methods

    NASA Astrophysics Data System (ADS)

    Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard

    2013-01-01

    The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smooth Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach which we term `collisionless stellar hydrodynamics' enables us to do away with the particle-mesh approach and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.
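
    Schematically, the lowest moments of the collisionless Boltzmann equation that such a 'stellar hydrodynamics' scheme evolves are (a generic form, not necessarily the exact closure adopted in the paper)

```latex
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!(\rho\,\mathbf{u}) = 0,
\qquad
\frac{\partial (\rho\,\mathbf{u})}{\partial t}
 + \nabla\!\cdot\!\big(\rho\,\mathbf{u}\mathbf{u} + \boldsymbol{\Pi}\big)
 = -\rho\,\nabla\Phi ,
```

    where Π is the velocity-dispersion (pressure-like) tensor of the collisionless component and a closure must be supplied for its evolution.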

  15. CUBE: Information-optimized parallel cosmological N-body simulation code

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Ran; Pen, Ue-Li; Wang, Xin

    2018-05-01

    CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can be as low as 6 bytes per particle. A particle-pairwise (PP) force, cosmological neutrinos, and a spherical overdensity (SO) halo finder are included.
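
    A memory footprint of a few bytes per particle is usually achieved by storing each position as a small integer offset relative to the mesh cell the particle occupies, with the cell index kept implicitly through particle ordering, rather than as full floating-point coordinates. The snippet below is a generic Python illustration of that idea, not CUBE's actual Coarray Fortran data layout.

```python
import numpy as np

def compress(pos, ngrid):
    """Split positions (in mesh units, 0 <= pos < ngrid) into a cell index
    and a one-byte in-cell offset.  In a production layout the cell index is
    implicit in how particles are ordered, so the per-particle cost is
    essentially just the offset bytes."""
    cell = np.minimum(np.floor(pos), ngrid - 1).astype(np.int32)
    offs = np.round((pos - cell) * 255).astype(np.uint8)
    return cell, offs

def decompress(cell, offs):
    """Recover approximate positions; the error is at most ~1/500 of a cell."""
    return cell + offs.astype(np.float64) / 255.0

# Toy usage on a 64^3 mesh.
pos = np.random.default_rng(1).random((5, 3)) * 64.0
cell, offs = compress(pos, 64)
max_err = np.abs(decompress(cell, offs) - pos).max()   # ~2e-3 cell widths
```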

  16. Simulation of Alfvén eigenmode bursts using a hybrid code for nonlinear magnetohydrodynamics and energetic particles

    NASA Astrophysics Data System (ADS)

    Todo, Y.; Berk, H. L.; Breizman, B. N.

    2012-03-01

    A hybrid simulation code for nonlinear magnetohydrodynamics (MHD) and energetic-particle dynamics has been extended to simulate recurrent bursts of Alfvén eigenmodes by implementing an energetic-particle source, collisions and losses. The Alfvén eigenmode bursts with synchronization of multiple modes and beam ion losses at each burst are successfully simulated with nonlinear MHD effects for physics conditions similar to a reduced simulation of a TFTR experiment (Wong et al 1991 Phys. Rev. Lett. 66 1874, Todo et al 2003 Phys. Plasmas 10 2888). A comparison between nonlinear MHD and linear MHD simulation results demonstrates that the nonlinear MHD effects significantly reduce both the saturation amplitude of the Alfvén eigenmodes and the beam ion losses. Two types of time evolution are found depending on the MHD dissipation coefficients, namely viscosity, resistivity and diffusivity. The Alfvén eigenmode bursts take place for higher dissipation coefficients with a roughly 10% drop in stored beam energy and a maximum amplitude of the dominant magnetic fluctuation harmonic δB_{m/n}/B ~ 5 × 10⁻³ at the mode peak location inside the plasma. A quadratic dependence of the beam ion loss rate on the magnetic fluctuation amplitude is found for the bursting evolution in the nonlinear MHD simulation. For lower dissipation coefficients, the amplitude of the Alfvén eigenmodes stays at steady levels of δB_{m/n}/B ~ 2 × 10⁻³ and the beam ion losses take place continuously. The beam ion pressure profiles are similar among the different dissipation coefficients, and the stored beam energy is higher for higher dissipation coefficients.

  17. A two-dimensional model of odd nitrogen in the thermosphere and mesosphere

    NASA Technical Reports Server (NTRS)

    Gerard, J. C.; Roble, R. G.; Rusch, D. W.

    1980-01-01

    Satellite measurements of the global nitric oxide distribution demonstrating the need for a two-dimensional model of odd nitrogen photochemistry and transport in the thermosphere and mesosphere are reviewed. The main characteristics of a new code solving the transport equations for N(4S), N(2D), and NO are given. This model extends from pole to pole between 75 and 275 km and responds to the magnetic activity, the ultraviolet solar flux, and the neutral wind field. The effects of ionization and subsequent odd nitrogen production by high latitude particle precipitation are also included. Preliminary results are illustrated for a magnetically quiet solar minimum period with no neutral wind.

  18. Extension of the XGC code for global gyrokinetic simulations in stellarator geometry

    NASA Astrophysics Data System (ADS)

    Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock

    2017-10-01

    In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.

  19. ZENO: N-body and SPH Simulation Codes

    NASA Astrophysics Data System (ADS)

    Barnes, Joshua E.

    2011-02-01

    The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. Zeno programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: structured data file utilities that facilitate basic operations on binary data, including import/export of ZENO data to other systems; snapshot generation routines that create particle distributions with various properties (systems with user-specified density profiles can be realized in collisionless or gaseous form, and multiple spherical and disk components may be set up in mutual equilibrium); snapshot manipulation routines that permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle; simulation codes, including both pure N-body and combined N-body/SPH programs (pure N-body codes are available in both uniprocessor and parallel versions, and SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models); snapshot analysis programs that calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions; and visualization programs that generate interactive displays and produce still images and videos of particle distributions, where the user may specify arbitrary color schemes and viewing transformations.

  20. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

    We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N²), while full hierarchy searches can be considerably more expensive, making analysis highly inefficient for multiple large-N data sets. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
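
    As an illustration of the underlying idea (not the companion algorithm itself), candidate neighbours can be gathered with a spatial tree in O(N log N) and then tested for mutual binding through their two-body orbital energy. A hedged Python sketch using scipy's cKDTree:

```python
import numpy as np
from scipy.spatial import cKDTree

def bound_pairs(pos, vel, mass, r_search, G=1.0):
    """Return index pairs whose two-body orbital energy is negative.

    Illustrative first pass only: a real hierarchy finder (such as companion)
    also merges bound pairs into composite objects and repeats the search to
    build hierarchies.
    """
    tree = cKDTree(pos)
    pairs = []
    for i, j in tree.query_pairs(r_search):     # O(N log N) candidate search
        r = np.linalg.norm(pos[i] - pos[j])
        v2 = ((vel[i] - vel[j])**2).sum()
        mu = mass[i] * mass[j] / (mass[i] + mass[j])      # reduced mass
        if 0.5 * mu * v2 - G * mass[i] * mass[j] / r < 0.0:
            pairs.append((i, j))
    return pairs

# Toy usage with random positions/velocities and unit masses.
rng = np.random.default_rng(2)
pos, vel = rng.random((500, 3)), 0.01 * rng.standard_normal((500, 3))
found = bound_pairs(pos, vel, np.ones(500), r_search=0.05)
```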

  1. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  2. Global linear gyrokinetic simulations for LHD including collisions

    NASA Astrophysics Data System (ADS)

    Kauffmann, K.; Kleiber, R.; Hatzky, R.; Borchardt, M.

    2010-11-01

    The code EUTERPE uses a Particle-In-Cell (PIC) method to solve the gyrokinetic equation globally (full radius, full flux surface) for three-dimensional equilibria calculated with VMEC. Recently this code has been extended to include multiple kinetic species and electromagnetic effects. Additionally, a pitch-angle scattering operator has been implemented in order to include collisional effects in the simulation of instabilities and to be able to simulate neoclassical transport. As a first application of this extended code we study the effects of collisions on electrostatic ion-temperature-gradient (ITG) instabilities in LHD.
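
    For reference, a pitch-angle (Lorentz-type) scattering operator acting on the distribution function f has the generic form

```latex
C[f] \;=\; \frac{\nu(v)}{2}\,\frac{\partial}{\partial \xi}
\left[\left(1-\xi^{2}\right)\frac{\partial f}{\partial \xi}\right],
\qquad \xi = v_{\parallel}/v ,
```

    although the precise coefficients and conservation corrections used in EUTERPE may differ from this textbook form.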

  3. Rotational Shear Effects on Edge Harmonic Oscillations in DIII-D Quiescent H-mode Discharges

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Burrell, K. H.; Ferraro, N. M.; Osborne, T. H.; Austin, M. E.; Garofalo, A. M.; Groebner, R. J.; Kramer, G. J.; Luhmann, N. C., Jr.; McKee, G. R.; Muscatello, C. M.; Nazikian, R.; Ren, X.; Snyder, P. B.; Solomon, Wm.; Tobias, B. J.; Yan, Z.

    2015-11-01

    In quiescent H-mode (QH) regime, the edge harmonic oscillations (EHO) play an important role in avoiding the transient ELM power fluxes by providing benign and continuous edge particle transport. A detailed theoretical, experimental and modeling comparison has been made of low-n (n <= 5) EHO in DIII-D QH-mode plasmas. The calculated linear eigenmode structure from the extended MHD code M3D-C1 matches closely the coherent EHO properties from external magnetics data and internal measurements using the ECE, BES, ECE-I and MIR diagnostics, as well as the kink/peeling mode properties of the ideal MHD code ELITE. The numerical investigations indicate that the low-n EHO-like solutions from M3D-C1 are destabilized by the toroidal rotational shear while high-n modes are stabilized. This effect is independent of the rotation direction, suggesting that the low-n EHO can be destabilized in principle with rotation in both directions. These modeling results are consistent with experimental observations of the EHO and support the proposed theory of the EHO as a rotational shear driven kink/peeling mode.

  4. Calculation of spherical harmonics and Wigner d functions by FFT. Applications to fast rotational matching in molecular replacement and implementation into AMoRe.

    PubMed

    Trapani, Stefano; Navaza, Jorge

    2006-07-01

    The FFT calculation of spherical harmonics, Wigner D matrices and rotation function has been extended to all angular variables in the AMoRe molecular replacement software. The resulting code avoids singularity issues arising from recursive formulas, performs faster and produces results with at least the same accuracy as the original code. The new code aims at permitting accurate and more rapid computations at high angular resolution of the rotation function of large particles. Test calculations on the icosahedral IBDV VP2 subviral particle showed that the new code performs on the average 1.5 times faster than the original code.

  5. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
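
    The key ingredient of the Lucy method is that the absorption rate in each resolution element is estimated from the path lengths of the luminosity packets crossing it; schematically (my paraphrase of the standard Lucy-type path-length estimator, not a formula quoted from this paper)

```latex
\dot{A} \;\approx\; \frac{\epsilon}{\Delta t}\,\frac{1}{V}
\sum_{\text{path segments}} \rho\,\kappa_{\nu}\,\ell ,
```

    where ε is the packet energy, Δt the emission interval, V the volume of the element and ℓ the segment length; the equilibrium dust temperature then follows by balancing this against the local thermal emissivity.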

  6. Full-f version of GENE for turbulence in open-field-line systems

    NASA Astrophysics Data System (ADS)

    Pan, Q.; Told, D.; Shi, E. L.; Hammett, G. W.; Jenko, F.

    2018-06-01

    Unique properties of plasmas in the tokamak edge, such as large amplitude fluctuations and plasma-wall interactions in the open-field-line regions, require major modifications of existing gyrokinetic codes originally designed for simulating core turbulence. To this end, the global version of the 3D2V gyrokinetic code GENE, so far employing a δf-splitting technique, is extended to simulate electrostatic turbulence in straight open-field-line systems. The major extensions are the inclusion of the velocity-space nonlinearity, the development of a conducting-sheath boundary, and the implementation of the Lenard-Bernstein collision operator. With these developments, the code can be run as a full-f code and can handle particle loss to and reflection from the wall. The extended code is applied to modeling turbulence in the Large Plasma Device (LAPD), with a reduced mass ratio and a much lower collisionality. Similar to turbulence in a tokamak scrape-off layer, LAPD turbulence involves collisions, parallel streaming, cross-field turbulent transport with steep profiles, and particle loss at the parallel boundary.
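
    For reference, the Lenard-Bernstein model operator mentioned here has the one-dimensional velocity-space form (normalizations vary between implementations)

```latex
C[f] \;=\; \nu\,\frac{\partial}{\partial v}\!\left( v\,f + v_{t}^{2}\,\frac{\partial f}{\partial v} \right),
```

    which combines drag toward zero velocity with velocity-space diffusion, so that a Maxwellian with thermal speed v_t is its stationary solution.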

  7. Accelerating NBODY6 with graphics processing units

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Aarseth, Sverre J.

    2012-07-01

    We describe the use of graphics processing units (GPUs) for speeding up the code NBODY6 which is widely used for direct N-body simulations. Over the years, the N² nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 per cent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction term is calculated using mainly single precision. We also discuss further strategies connected with coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10⁴-2 × 10⁵ for a dual-GPU system attached to a standard PC.

  8. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
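
    The structural quantities mentioned (degree, Forney indices, minimal generator and parity-check matrices) all derive from a Smith-type decomposition of the k × n polynomial generator matrix; in outline (standard algebra, not a result specific to this article)

```latex
G(D) \;=\; U(D)\,\Gamma(D)\,V(D),
\qquad
\Gamma(D) \;=\; \Big(\operatorname{diag}\big(\gamma_{1}(D),\ldots,\gamma_{k}(D)\big)\;\;\big|\;\;\mathbf{0}\Big),
```

    where U(D) and V(D) are unimodular (their inverses are again polynomial matrices) and the invariant factors γ_i(D) divide one another in sequence.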

  9. Study of premixing phase of steam explosion with JASMINE code in ALPHA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu

    The premixing phase of steam explosion has been studied in the ALPHA Program at the Japan Atomic Energy Research Institute (JAERI). An analytical model to simulate the premixing phase, JASMINE (JAERI Simulator for Multiphase Interaction and Explosion), has been developed based on a multi-dimensional multi-phase thermal hydraulics code MISTRAL (by Fuji Research Institute Co.). The original code was extended to simulate the physics of the premixing phenomena. The first stage of the code validation was performed by analyzing two mixing experiments with solid particles and water: the isothermal experiment by Gilbertson et al. (1992) and the hot particle experiment by Angelini et al. (1993) (MAGICO). The code predicted the experiments reasonably well. The effectiveness of the TVD scheme employed in the code was also demonstrated.

  10. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  11. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  12. Multiuser Transmit Beamforming for Maximum Sum Capacity in Tactical Wireless Multicast Networks

    DTIC Science & Technology

    2006-08-01

    commonly used extended Kalman filter. See [2, 5, 6] for recent tutorial overviews. In particle filtering, continuous distributions are approximated by...signals (using and developing associated particle filtering tools). Our work on these topics has been reported in seven (IEEE, SIAM) journal papers and...multidimensional scaling, tracking, intercept, particle filters.

  13. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

    GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OPENMP and MPI and contains a PYTHON library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website github at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.

  14. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  15. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  16. LIGKA: A linear gyrokinetic code for the description of background kinetic and fast particle effects on the MHD stability in tokamaks

    NASA Astrophysics Data System (ADS)

    Lauber, Ph.; Günter, S.; Könies, A.; Pinches, S. D.

    2007-09-01

    In a plasma with a population of super-thermal particles generated by heating or fusion processes, kinetic effects can lead to the additional destabilisation of MHD modes or even to additional energetic particle modes. In order to describe these modes, a new linear gyrokinetic MHD code has been developed and tested, LIGKA (linear gyrokinetic shear Alfvén physics) [Ph. Lauber, Linear gyrokinetic description of fast particle effects on the MHD stability in tokamaks, Ph.D. Thesis, TU München, 2003; Ph. Lauber, S. Günter, S.D. Pinches, Phys. Plasmas 12 (2005) 122501], based on a gyrokinetic model [H. Qin, Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks, Ph.D. Thesis, Princeton University, 1998]. A finite Larmor radius expansion together with the construction of some fluid moments and specification to the shear Alfvén regime results in a self-consistent, electromagnetic, non-perturbative model, that allows not only for growing or damped eigenvalues but also for a change in mode-structure of the magnetic perturbation due to the energetic particles and background kinetic effects. Compared to previous implementations [H. Qin, mentioned above], this model is coded in a more general and comprehensive way. LIGKA uses a Fourier decomposition in the poloidal coordinate and a finite element discretisation in the radial direction. Both analytical and numerical equilibria can be treated. Integration over the unperturbed particle orbits is performed with the drift-kinetic HAGIS code [S.D. Pinches, Ph.D. Thesis, The University of Nottingham, 1996; S.D. Pinches et al., CPC 111 (1998) 131] which accurately describes the particles' trajectories. This allows finite-banana-width effects to be implemented in a rigorous way since the linear formulation of the model allows the exchange of the unperturbed orbit integration and the discretisation of the perturbed potentials in the radial direction. Successful benchmarks for toroidal Alfvén eigenmodes (TAEs) and kinetic Alfvén waves (KAWs) with analytical results, ideal MHD codes, drift-kinetic codes and other codes based on kinetic models are reported.

  17. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with N the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
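
    For context, the Gram-Schmidt (QR) step that dominates this cost re-orthonormalizes the evolved tangent vectors at regular intervals and accumulates the logarithms of the resulting stretching factors. The NumPy sketch below shows the generic Benettin-style procedure for a user-supplied tangent map; `jacobian` and `step` are hypothetical placeholders, and this is not the ScaLAPACK or MAGMA implementation described in the record.

```python
import numpy as np

def lyapunov_spectrum(jacobian, step, x0, n_steps, n_exp):
    """Benettin-style Lyapunov exponents via repeated QR re-orthonormalization.

    `jacobian(x)` returns the one-step tangent map and `step(x)` advances the
    state; both are user-supplied placeholders here.
    """
    x = np.asarray(x0, dtype=float)
    Q = np.eye(len(x))[:, :n_exp]               # orthonormal tangent vectors
    sums = np.zeros(n_exp)
    for _ in range(n_steps):
        Q = jacobian(x) @ Q                     # evolve the tangent vectors
        Q, R = np.linalg.qr(Q)                  # Gram-Schmidt / QR step
        sums += np.log(np.abs(np.diag(R)))      # accumulate stretching factors
        x = step(x)                             # evolve the reference trajectory
    return sums / n_steps                       # exponents per step

# Toy usage: a fixed linear map with known exponents log(2) and log(0.5).
A = np.diag([2.0, 0.5])
exps = lyapunov_spectrum(lambda x: A, lambda x: A @ x, np.zeros(2), 100, 2)
```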

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    The quantum color coding scheme proposed by Korff and Kempe [e-print quant-ph/0405086] is easily extended so that the color coding quantum system is allowed to be entangled with an extra auxiliary quantum system. It is shown that in the extended scheme we need only ≈2√N quantum colors to order N objects in the large-N limit, whereas ≈N/e quantum colors are required in the original nonextended version. The maximum success probability has asymptotics expressed by the Tracy-Widom distribution of the largest eigenvalue of a random Gaussian unitary ensemble (GUE) matrix.

  19. Prompt Radiation Protection Factors

    DTIC Science & Technology

    2018-02-01

    ...radiation was performed using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle) and the evaluation of the protection factors (ratio of dose in the open to... ...by detonation of a nuclear device have placed renewed emphasis on evaluation of the consequences in case of such an event. The Defense Threat

  20. Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B.; Ibrahim, Ahmad M.

    2017-05-01

    This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with proton beam energy of 1.3 GeV. The analysis implemented a coupled three dimensional (3D)/two dimensional (2D) approach that used both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) two dimensional deterministic code. The analysis with proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis is updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.

  1. NTRFACE for MAGIC

    DTIC Science & Technology

    1989-07-31

    TITLE: NTRFACE FOR MAGIC. PERSONAL AUTHOR(S): N. T. Gladd. ...the MAGIC Particle-in-Cell Simulation Code. ABSTRACT: The NTRFACE system was developed...made concrete by applying it to a specific application: a mature, highly complex plasma physics particle-in-cell simulation code named MAGIC. This

  2. Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.

    2010-12-01

    The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originating at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.

  3. Relating quantum discord with the quantum dense coding capacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Qiu, Liang, E-mail: lqiu@cumt.edu.cn; Li, Song

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  4. Confinement properties of tokamak plasmas with extended regions of low magnetic shear

    NASA Astrophysics Data System (ADS)

    Graves, J. P.; Cooper, W. A.; Kleiner, A.; Raghunathan, M.; Neto, E.; Nicolas, T.; Lanthaler, S.; Patten, H.; Pfefferle, D.; Brunetti, D.; Lutjens, H.

    2017-10-01

    Extended regions of low magnetic shear can be advantageous to tokamak plasmas. But the core and edge can be susceptible to non-resonant ideal fluctuations due to the weakened restoring force associated with magnetic field line bending. This contribution shows how saturated non-linear phenomenology, such as 1/1 Long Lived Modes and Edge Harmonic Oscillations associated with QH-modes, can be modelled accurately using the non-linear stability code XTOR, the free boundary 3D equilibrium code VMEC, and non-linear analytic theory. That the equilibrium approach is valid is particularly valuable because it enables advanced particle confinement studies to be undertaken in the ordinarily difficult environment of strongly 3D magnetic fields. The VENUS-LEVIS code exploits the Fourier description of the VMEC equilibrium fields, such that full Lorentzian and guiding-centre approximated differential operators in curvilinear angular coordinates can be evaluated analytically. Consequently, the confinement properties of minority ions such as energetic particles and high-Z impurities can be calculated accurately over slowing-down timescales in experimentally relevant 3D plasmas.

  5. Implementation of the 3D edge plasma code EMC3-EIRENE on NSTX

    DOE PAGES

    Lore, J. D.; Canik, J. M.; Feng, Y.; ...

    2012-05-09

    The 3D edge transport code EMC3-EIRENE has been applied for the first time to the NSTX spherical tokamak. A new disconnected double null grid has been developed to allow the simulation of plasma where the radial separation of the inner and outer separatrix is less than characteristic widths (e.g. heat flux width) at the midplane. Modelling results are presented for both an axisymmetric case and a case where a 3D magnetic field is applied in an n = 3 configuration. In the vacuum approximation, the perturbed field consists of a wide region of destroyed flux surfaces and helical lobes which are a mixture of long and short connection length field lines formed by the separatrix manifolds. This structure is reflected in coupled 3D plasma fluid (EMC3) and kinetic neutral particle (EIRENE) simulations. The helical lobes extending inside of the unperturbed separatrix are filled in by hot plasma from the core. The intersection of the lobes with the divertor results in a striated flux footprint pattern on the target plates. Profiles of divertor heat and particle fluxes are compared with experimental data, and possible sources of discrepancy are discussed.

  6. Matter power spectrum and the challenge of percent accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc^-1 and to within three percent at k ≤ 10 h Mpc^-1. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc^-1. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h^-1 Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10^9 h^-1 M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
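
    The closing estimate (a box no smaller than 0.5 h^-1 Gpc, a particle mass no larger than 10^9 h^-1 M_⊙, hence over a trillion particles for survey volumes) is simple arithmetic, N = ρ_m V / M_p. A minimal sketch of that arithmetic is given below; the value of Ω_m and the Gpc-scale survey box are assumed for illustration and are not taken from the paper.

```python
# Particle-count arithmetic behind the closing statement: N = rho_m * V / M_p,
# with rho_m = Omega_m * rho_crit.  Omega_m and the survey box size are assumed
# here for illustration.
RHO_CRIT = 2.775e11   # critical density in h^2 Msun / Mpc^3
OMEGA_M = 0.3         # assumed matter density parameter

def n_particles(box_mpc_h: float, particle_mass_msun_h: float) -> float:
    """Equal-mass particles needed to fill a periodic box of side box_mpc_h [Mpc/h]."""
    rho_m = OMEGA_M * RHO_CRIT            # h^2 Msun / Mpc^3
    total_mass = rho_m * box_mpc_h ** 3   # Msun/h  (h^2 * h^-3 = h^-1)
    return total_mass / particle_mass_msun_h

print(f"{n_particles(500.0, 1e9):.1e}")   # minimum box from the abstract: ~1e10 particles
print(f"{n_particles(3000.0, 1e9):.1e}")  # Gpc-scale survey volume: >1e12 particles
```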

  7. Rotational shear effects on edge harmonic oscillations in DIII-D quiescent H-mode discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Burrell, Keith H.; Ferraro, Nathaniel M.

    In the quiescent H-mode (QH-mode) regime, edge harmonic oscillations (EHO) play an important role in avoiding transient edge localized mode (ELM) power fluxes by providing benign and continuous edge particle transport. A detailed theoretical, experimental and modeling comparison has been made of low-n (n ≤ 5) EHO in DIII-D QH-mode plasmas. The calculated linear eigenmode structure from the extended MHD code M3D-C1 matches closely the coherent EHO properties from external magnetics data and internal measurements using the ECE, BES, ECE-Imaging and microwave imaging reflectometer (MIR) diagnostics, as well as the kink/peeling mode properties found by the ideal MHD code ELITE. Numerical investigations indicate that the low-n EHO-like solutions from M3D-C1 are destabilized by the rotational shear while high-n modes are stabilized. This effect is independent of the rotation direction, suggesting that EHO can be destabilized in principle with rotation in either direction. Furthermore, the modeling results are consistent with observations of the EHO, support the proposed theory of the EHO as a rotational shear driven kink/peeling mode, and improve our understanding and confidence in creating and sustaining QH-mode in present and future devices.

  8. Rotational shear effects on edge harmonic oscillations in DIII-D quiescent H-mode discharges

    DOE PAGES

    Chen, Xi; Burrell, Keith H.; Ferraro, Nathaniel M.; ...

    2016-06-21

    In the quiescent H-mode (QH-mode) regime, edge harmonic oscillations (EHO) play an important role in avoiding transient edge localized mode (ELM) power fluxes by providing benign and continuous edge particle transport. A detailed theoretical, experimental and modeling comparison has been made of low-n (n ≤ 5) EHO in DIII-D QH-mode plasmas. The calculated linear eigenmode structure from the extended MHD code M3D-C1 matches closely the coherent EHO properties from external magnetics data and internal measurements using the ECE, BES, ECE-Imaging and microwave imaging reflectometer (MIR) diagnostics, as well as the kink/peeling mode properties found by the ideal MHD code ELITE. Numerical investigations indicate that the low-n EHO-like solutions from M3D-C1 are destabilized by the rotational shear while high-n modes are stabilized. This effect is independent of the rotation direction, suggesting that EHO can be destabilized in principle with rotation in either direction. Furthermore, the modeling results are consistent with observations of the EHO, support the proposed theory of the EHO as a rotational shear driven kink/peeling mode, and improve our understanding and confidence in creating and sustaining QH-mode in present and future devices.

  9. Rotational shear effects on edge harmonic oscillations in DIII-D quiescent H-mode discharges

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Burrell, K. H.; Ferraro, N. M.; Osborne, T. H.; Austin, M. E.; Garofalo, A. M.; Groebner, R. J.; Kramer, G. J.; Luhmann, N. C., Jr.; McKee, G. R.; Muscatello, C. M.; Nazikian, R.; Ren, X.; Snyder, P. B.; Solomon, W. M.; Tobias, B. J.; Yan, Z.

    2016-07-01

    In the quiescent H-mode (QH-mode) regime, edge harmonic oscillations (EHOs) play an important role in avoiding transient edge localized mode (ELM) power fluxes by providing benign and continuous edge particle transport. A detailed theoretical, experimental and modeling comparison has been made of low-n (n ⩽ 5) EHO in DIII-D QH-mode plasmas. The calculated linear eigenmode structure from the extended magnetohydrodynamics (MHD) code M3D-C1 matches closely the coherent EHO properties from external magnetics data and internal measurements using the ECE, BES, ECE-Imaging and microwave imaging reflectometer (MIR) diagnostics, as well as the kink/peeling mode properties found by the ideal MHD code ELITE. Numerical investigations indicate that the low-n EHO-like solutions from M3D-C1 are destabilized by rotation and/or rotational shear while high-n modes are stabilized. This effect is independent of the rotation direction, suggesting that EHOs can be destabilized in principle with rotation in either direction. The modeling results are consistent with observations of EHO, support the proposed theory of the EHO as a low-n kink/peeling mode destabilized by edge E × B rotational shear, and improve our understanding and confidence in creating and sustaining QH-mode in present and future devices.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai

    We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles are specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.

  11. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count N). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N = 100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
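
    The repeated QR factorisation described above is the classic Benettin-style Gram–Schmidt procedure. The sketch below runs that procedure on the two-dimensional Hénon map rather than a Lennard–Jones fluid, purely to show the algorithmic skeleton that the ScaLAPACK and MAGMA implementations distribute over much larger tangent spaces.

```python
# Benettin-style Gram-Schmidt (QR) computation of Lyapunov exponents,
# illustrated on the 2-D Henon map; the repeated QR factorisation is the
# same operation the record above parallelises for much larger systems.
import numpy as np

def henon_step(state, a=1.4, b=0.3):
    x, y = state
    return np.array([1.0 - a * x * x + y, b * x])

def henon_jacobian(state, a=1.4, b=0.3):
    x, _ = state
    return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

def lyapunov_spectrum(n_steps=100_000, n_transient=1_000):
    state = np.array([0.1, 0.1])
    for _ in range(n_transient):          # settle onto the attractor first
        state = henon_step(state)
    Q = np.eye(2)
    log_r = np.zeros(2)
    for _ in range(n_steps):
        Q = henon_jacobian(state) @ Q     # push tangent vectors forward
        Q, R = np.linalg.qr(Q)            # re-orthonormalise (Gram-Schmidt)
        log_r += np.log(np.abs(np.diag(R)))
        state = henon_step(state)
    return log_r / n_steps                # exponents per map iteration

print(lyapunov_spectrum())                # roughly [ 0.42, -1.62 ]
```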

  12. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and a time-step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ∼ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥ 10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
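
    The particle-particle particle-tree split described above can be summarised in a few lines: pairs inside the cut-off radius are summed directly (and, in PENTACLE, advanced with a fourth-order Hermite scheme), while everything outside is delegated to a Barnes-Hut tree. In the hedged sketch below the far-field part is a brute-force stand-in rather than a tree, so only the split itself is illustrated; this is not PENTACLE's implementation.

```python
# Minimal sketch of the particle-particle / particle-tree force split: pairs
# closer than R_cut go into the "near" (direct / Hermite) sum, the rest into
# the "far" sum that a real P3T code delegates to a Barnes-Hut tree.  The far
# part here is brute force, purely to keep the split visible.  G = 1 units.
import numpy as np

def split_accelerations(pos, mass, r_cut, eps=1e-3):
    a_near = np.zeros_like(pos)
    a_far = np.zeros_like(pos)
    for i in range(len(mass)):
        dr = pos - pos[i]                              # vectors i -> j
        r2 = np.sum(dr * dr, axis=1) + eps * eps       # softened distances
        r2[i] = np.inf                                 # skip self-interaction
        acc = (mass / r2 ** 1.5)[:, None] * dr
        near = r2 < r_cut ** 2
        a_near[i] = acc[near].sum(axis=0)              # direct / Hermite part
        a_far[i] = acc[~near].sum(axis=0)              # tree part (stand-in)
    return a_near, a_far

rng = np.random.default_rng(1)
pos = rng.random((64, 3))
mass = np.full(64, 1.0 / 64)
a1 = np.add(*split_accelerations(pos, mass, r_cut=0.2))
a2 = np.add(*split_accelerations(pos, mass, r_cut=0.5))
print(np.abs(a1 - a2).max())   # near machine precision: the split leaves the total force unchanged
```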

  13. Verification of long wavelength electromagnetic modes with a gyrokinetic-fluid hybrid model in the XGC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hager, Robert; Lang, Jianying; Chang, C. S.

    As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons. Here, two representative long wavelength modes, shear Alfven waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries.

  14. Verification of long wavelength electromagnetic modes with a gyrokinetic-fluid hybrid model in the XGC code

    DOE PAGES

    Hager, Robert; Lang, Jianying; Chang, C. S.; ...

    2017-05-24

    As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons. Here, two representative long wavelength modes, shear Alfven waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries.

  15. Extension of applicable neutron energy of DARWIN up to 1 GeV.

    PubMed

    Satoh, D; Sato, T; Endo, A; Matsufuji, N; Takada, M

    2007-01-01

    The radiation-dose monitor, DARWIN, needs a set of response functions of the liquid organic scintillator to assess a neutron dose. SCINFUL-QMD is a Monte Carlo based computer code to evaluate the response functions. In order to improve the accuracy of the code, a new light-output function based on the experimental data was developed for the production and transport of protons, deuterons, tritons, 3He nuclei and alpha particles, and incorporated into the code. The applicable energy of DARWIN was extended to 1 GeV using the response functions calculated by the modified SCINFUL-QMD code.

  16. N-MODY: a code for collisionless N-body simulations in modified Newtonian dynamics.

    NASA Astrophysics Data System (ADS)

    Londrillo, P.; Nipoti, C.

    We describe the numerical code N-MODY, a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.

  17. Dual neutral particle induced transmutation in CINDER2008

    NASA Astrophysics Data System (ADS)

    Martin, W. J.; de Oliveira, C. R. E.; Hecht, A. A.

    2014-12-01

    Although nuclear transmutation methods for fission have existed for decades, the focus has been on neutron-induced reactions. Recent novel concepts have sought to use both neutrons and photons for purposes such as active interrogation of cargo to detect the smuggling of highly enriched uranium, a concept that would require modeling the transmutation caused by both incident particles. As photonuclear transmutation has yet to be modeled alongside neutron-induced transmutation in a production code, new methods need to be developed. The CINDER2008 nuclear transmutation code from Los Alamos National Laboratory is extended from neutron applications to dual neutral particle applications, allowing both neutron- and photon-induced reactions for this modeling with a focus on fission. Following standard reaction modeling, the induced fission reaction is understood as a two-part reaction, with an entrance channel to the excited compound nucleus, and an exit channel from the excited compound nucleus to the fission fragmentation. Because photofission yield data (the exit channel from the compound nucleus) are sparse, neutron fission yield data are used in this work. With a different compound nucleus and excitation, the translation to the excited compound state is modified, as appropriate. A verification and validation of these methods and data has been performed. This has shown that the translation of neutron-induced fission product yield sets, and their use in photonuclear applications, is appropriate, and that the code has been extended correctly.

  18. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    Data were derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). MCNP is a general-purpose code designed to simulate neutron and photon transport.

  19. Testing and Validating Gadget2 for GPUs

    NASA Astrophysics Data System (ADS)

    Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.

    2013-01-01

    We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with an overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.

  20. Performance tuning of N-body codes on modern microprocessors: I. Direct integration with a hermite scheme on x86_64 architecture

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro; Hut, Piet

    2006-12-01

    The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code, running on these chips, can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N^2-type integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops for double-precision accuracy. In subsequent papers, we will discuss other variations, including the combinations of N log N codes, single-precision implementations, and performance on other microprocessors.
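
    The pair-wise kernel referred to above is, for a fourth-order Hermite scheme, the evaluation of the acceleration together with its time derivative (the "jerk"). A plain NumPy reference version is sketched below for clarity; the paper's contribution is a hand-tuned x86_64 assembly version of this inner loop, which the sketch does not attempt to reproduce.

```python
# Reference (un-tuned) pair-wise kernel needed by a 4th-order Hermite
# integrator: softened acceleration and its time derivative ("jerk"),
# in G = 1 units.
import numpy as np

def acc_and_jerk(pos, vel, mass, eps=1e-4):
    acc = np.zeros_like(pos)
    jerk = np.zeros_like(pos)
    for i in range(len(mass)):
        dr = pos - pos[i]                       # separation vectors i -> j
        dv = vel - vel[i]                       # relative velocities
        r2 = np.sum(dr * dr, axis=1) + eps * eps
        r2[i] = np.inf                          # no self-force
        inv_r3 = r2 ** -1.5
        rv = np.sum(dr * dv, axis=1)            # dr . dv
        acc[i] = np.sum((mass * inv_r3)[:, None] * dr, axis=0)
        jerk[i] = np.sum((mass * inv_r3)[:, None] * dv
                         - (3.0 * mass * rv / r2 * inv_r3)[:, None] * dr, axis=0)
    return acc, jerk
```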

  1. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.

  2. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
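
    For reference, the leapfrog option mentioned in the VINE abstracts above is the standard kick-drift-kick update. The sketch below shows only the global-time-step variant in Python; VINE's Fortran 95 integrator module additionally supports individual per-particle time steps and a Runge-Kutta-Fehlberg alternative.

```python
# Standard kick-drift-kick (KDK) leapfrog step, global time step only.
# accel_func(pos) must return the accelerations for the given positions
# (e.g. from a tree or direct summation); pos and vel are NumPy arrays.
def leapfrog_kdk(pos, vel, accel_func, dt):
    acc = accel_func(pos)
    vel = vel + 0.5 * dt * acc        # half kick
    pos = pos + dt * vel              # drift
    acc = accel_func(pos)             # forces at the new positions
    vel = vel + 0.5 * dt * acc        # second half kick
    return pos, vel, acc
```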

  3. Fully-kinetic Ion Simulation of Global Electrostatic Turbulent Transport in C-2U

    NASA Astrophysics Data System (ADS)

    Fulton, Daniel; Lau, Calvin; Bao, Jian; Lin, Zhihong; Tajima, Toshiki; TAE Team

    2017-10-01

    Understanding the nature of particle and energy transport in field-reversed configuration (FRC) plasmas is a crucial step towards an FRC-based fusion reactor. The C-2U device at Tri Alpha Energy (TAE) achieved macroscopically stable plasmas and electron energy confinement time which scaled favorably with electron temperature. This success led to experimental and theoretical investigation of turbulence in C-2U, including gyrokinetic ion simulations with the Gyrokinetic Toroidal Code (GTC). A primary objective of TAE's new C-2W device is to explore transport scaling in an extended parameter regime. In concert with the C-2W experimental campaign, numerical efforts have also been extended in A New Code (ANC) to use fully-kinetic (FK) ions and a Vlasov-Poisson field solver. Global FK ion simulations are presented. Future code development is also discussed.

  4. Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.

    A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and the plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE-modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels taking into account the damping effects and the different mode frequencies have been calculated with the VENUS code for both ballooning and antiballooning TAE-modes.

  5. Particle Number Dependence of the N-body Simulations of Moon Formation

    NASA Astrophysics Data System (ADS)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

    The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited from 10^4 to 10^5. We develop an N-body simulation code on multiple Pezy-SC processors and deploy the Framework for Developing Particle Simulators to deal with a large number of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10^7, in which 1 particle corresponds to a 10 km sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10^5) and high-resolution simulations (N ≥ 10^6). Owing to this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  6. Some Progress in Large-Eddy Simulation using the 3-D Vortex Particle Method

    NASA Technical Reports Server (NTRS)

    Winckelmans, G. S.

    1995-01-01

    This two-month visit at CTR was devoted to investigating possibilities in LES modeling in the context of the 3-D vortex particle method (= vortex element method, VEM) for unbounded flows. A dedicated code was developed for that purpose. Although O(N^2) and thus slow, it offers the advantage that it can easily be modified to try out many ideas on problems involving up to N ≈ 10^4 particles. Energy spectra (which require O(N^2) operations per wavenumber) are also computed. Progress was realized in the following areas: particle redistribution schemes, relaxation schemes to maintain the solenoidal condition on the particle vorticity field, simple LES models and their VEM extension, and possible new avenues in LES. Model problems that involve strong interaction between vortex tubes were computed, together with diagnostics: total vorticity, linear and angular impulse, energy and energy spectrum, and enstrophy. More work is needed, however, especially regarding relaxation schemes and further validation and development of LES models for VEM. Finally, what works well will eventually have to be incorporated into the fast parallel tree code.

  7. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  8. An algebraic hypothesis about the primeval genetic code architecture.

    PubMed

    Sánchez, Robersy; Grau, Ricardo

    2009-09-01

    A plausible architecture of an ancient genetic code is derived from an extended base triplet vector space over the Galois field of the extended base alphabet {D,A,C,G,U}, where the symbol D represents one or more hypothetical bases with unspecific pairings. We hypothesized that the high degeneration of a primeval genetic code with five bases and the gradual origin and improvement of a primeval DNA repair system could make possible the transition from ancient to modern genetic codes. Our results suggest that the Watson-Crick base pairings G≡C and A=U and the non-specific base pairing of the hypothetical ancestral base D used to define the sum and product operations are enough features to determine the coding constraints of the primeval and the modern genetic code, as well as the transition from the former to the latter. Geometrical and algebraic properties of this vector space reveal that the present codon assignment of the standard genetic code could be induced from a primeval codon assignment. Besides, the Fourier spectrum of the extended DNA genome sequences derived from the multiple sequence alignment suggests that the so-called period-3 property of the present coding DNA sequences could also exist in the ancient coding DNA sequences. The phylogenetic analyses achieved with metrics defined in the N-dimensional vector space (B^3)^N of DNA sequences and with the new evolutionary model presented here also suggest that an ancient DNA coding sequence with five or more bases does not contradict the expected evolutionary history.
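
    To make the algebra above concrete, the toy below performs sum and product operations over the extended alphabet {D, A, C, G, U} identified with the Galois field GF(5). The particular base-to-integer assignment is hypothetical, since the record does not state the ordering used in the paper; only the field structure is the point.

```python
# Toy arithmetic over the extended base alphabet {D, A, C, G, U} identified
# with GF(5).  The base -> integer assignment below is an assumption made for
# illustration, not the ordering used in the paper.
BASES = "DACGU"
TO_INT = {b: i for i, b in enumerate(BASES)}      # D=0, A=1, C=2, G=3, U=4 (assumed)

def add(b1: str, b2: str) -> str:
    return BASES[(TO_INT[b1] + TO_INT[b2]) % 5]

def mul(b1: str, b2: str) -> str:
    return BASES[(TO_INT[b1] * TO_INT[b2]) % 5]

def triplet_sum(t1: str, t2: str) -> str:
    """Component-wise sum of two base triplets, i.e. vectors in GF(5)^3."""
    return "".join(add(a, b) for a, b in zip(t1, t2))

print(add("A", "U"), mul("C", "G"), triplet_sum("ACG", "GUA"))
```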

  9. Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, S.K.; Blattnig, S.R.; Norbury, J.W.; Singleterry, R.C.

    2009-01-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  10. Impact of velocity space distribution on hybrid kinetic-magnetohydrodynamic simulation of the (1,1) mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Charlson C.

    2008-07-15

    Numeric studies of the impact of the velocity space distribution on the stabilization of the (1,1) internal kink mode and excitation of the fishbone mode are performed with a hybrid kinetic-magnetohydrodynamic model. These simulations demonstrate an extension of the physics capabilities of NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)], a three-dimensional extended magnetohydrodynamic (MHD) code, to include the kinetic effects of an energetic minority ion species. Kinetic effects are captured by a modification of the usual MHD momentum equation to include a pressure tensor calculated from the δf particle-in-cell method [S. E. Parker and W. W. Lee, Phys. Fluids B 5, 77 (1993)]. The particles are advanced in the self-consistent NIMROD fields. We outline the implementation and present simulation results of energetic minority ion stabilization of the (1,1) internal kink mode and excitation of the fishbone mode. A benchmark of the linear growth rate and real frequency is shown to agree well with another code. The impact of the details of the velocity space distribution is examined, particularly extending the velocity space cutoff of the simulation particles. Modestly increasing the cutoff strongly impacts the (1,1) mode. Numeric experiments are performed to study the impact of passing versus trapped particles. Observations of these numeric experiments suggest that assumptions of energetic particle effects should be re-examined.

  11. Pentium Pro Inside: I. A treecode at 430 Gigaflops on ASCI Red

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Becker, D. J.; Sterling, T.; Salmon, J. K.; Goda, M. P.

    1997-01-01

    As an entry for the 1997 Gordon Bell performance prize, we present results from two methods of solving the gravitational N-body problem on the Intel Teraflops system at Sandia National Laboratory (ASCI Red). The first method, an O(N^2) algorithm, obtained 635 Gigaflops for a 1 million particle problem on 6800 Pentium Pro processors. The second solution method, a tree-code which scales as O(N log N), sustained 170 Gigaflops over a continuous 9.4 hour period on 4096 processors, integrating the motion of 322 million mutually interacting particles in a cosmology simulation, while saving over 100 Gigabytes of raw data. Additionally, the tree-code sustained 430 Gigaflops on 6800 processors for the first 5 time-steps of that simulation. This tree-code solution is approximately 10^5 times more efficient than the O(N^2) algorithm for this problem. As an entry for the 1997 Gordon Bell price/performance prize, we present two calculations from the disciplines of astrophysics and fluid dynamics. The simulations were performed on two 16-processor Pentium Pro Beowulf-class computers (Loki and Hyglac) constructed entirely from commodity personal computer technology, at a cost of roughly $50k each in September 1996. The price of an equivalent system in August 1997 is less than $30k. At Los Alamos, Loki performed a gravitational tree-code N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 879 Mflops over a ten day period, and produced roughly 10 Gbytes of raw data.

  12. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
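
    The Cloud-in-Cell deposition mentioned above is the standard particle-mesh weighting. A one-dimensional sketch is given below; Nyx itself is a 3-D C++/AMReX code, so this is only the textbook scheme, not Nyx's implementation.

```python
# One-dimensional Cloud-in-Cell (CIC) mass deposition onto a periodic,
# cell-centred grid: each particle shares its mass between the two nearest
# cells in proportion to its distance from their centres.
import numpy as np

def cic_deposit_1d(positions, masses, n_cells, box_size):
    density = np.zeros(n_cells)
    dx = box_size / n_cells
    x = positions / dx - 0.5                 # position in cell-centre units
    left = np.floor(x).astype(int)
    frac = x - left                          # distance past the left cell centre
    for i, m in enumerate(masses):
        density[left[i] % n_cells] += m * (1.0 - frac[i]) / dx
        density[(left[i] + 1) % n_cells] += m * frac[i] / dx
    return density                           # mass per unit length

rng = np.random.default_rng(0)
rho = cic_deposit_1d(rng.random(1000), np.ones(1000), n_cells=32, box_size=1.0)
print(rho.sum() * (1.0 / 32))                # total mass is conserved: 1000.0
```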

  13. Improving fast generation of halo catalogues with higher order Lagrangian perturbation theory

    NASA Astrophysics Data System (ADS)

    Munari, Emiliano; Monaco, Pierluigi; Sefusatti, Emiliano; Castorina, Emanuele; Mohammad, Faizan G.; Anselmi, Stefano; Borgani, Stefano

    2017-03-01

    We present the latest version of PINOCCHIO, a code that generates catalogues of dark matter haloes in an approximate but fast way with respect to an N-body simulation. This code version implements a new on-the-fly production of halo catalogues on the past light cone with continuous time sampling, and the computation of particle and halo displacements is extended up to third-order Lagrangian perturbation theory (LPT), in contrast with previous versions that used the Zel'dovich approximation. We run PINOCCHIO on the same initial configuration as a reference N-body simulation, so that the comparison extends to the object-by-object level. We consider haloes at redshifts 0 and 1, using different LPT orders either for halo construction or to compute halo final positions. We compare the clustering properties of PINOCCHIO haloes with those from the simulation by computing the power spectrum and two-point correlation function in real and redshift space (monopole and quadrupole), the bispectrum and the phase difference of halo distributions. We find that 2LPT and 3LPT give a noticeable improvement. 3LPT provides the best agreement with N-body when it is used to displace haloes, while 2LPT gives better results for constructing haloes. At the highest orders, linear bias is typically recovered at a few per cent level. In Fourier space and using 3LPT for halo displacements, the halo power spectrum is recovered to within 10 per cent up to k_max ∼ 0.5 h Mpc^-1. The results presented in this paper have interesting implications for the generation of large ensembles of mock surveys for the scientific exploitation of data from big surveys.
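
    The displacements referred to above are easiest to picture at lowest (Zel'dovich) order, which earlier PINOCCHIO versions used: particles move along x(q, t) = q + D(t) Ψ(q). The 1-D sketch below illustrates that mapping with an assumed sine-wave initial condition; 2LPT and 3LPT add higher-order correction terms that are not shown.

```python
# Lowest-order (Zel'dovich) Lagrangian displacement in one dimension:
# x(q, t) = q + D(t) * Psi(q), with dPsi/dq = -delta at linear order.
# The sine-wave initial overdensity and the growth factors are illustrative.
import numpy as np

n, box = 256, 100.0                               # particles, box size (e.g. Mpc/h)
q = np.linspace(0.0, box, n, endpoint=False)      # Lagrangian positions
delta0 = 0.05 * np.sin(2.0 * np.pi * q / box)     # assumed linear overdensity

psi = -np.cumsum(delta0) * (box / n)              # Psi = -integral of delta dq

for growth in (1.0, 5.0, 10.0):                   # linear growth factor D(t)
    displacement = growth * psi
    print(f"D = {growth:4.1f}:  rms displacement = {np.std(displacement):.2f} Mpc/h")
```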

  14. Uncoupling cis-Acting RNA Elements from Coding Sequences Revealed a Requirement of the N-Terminal Region of Dengue Virus Capsid Protein in Virus Particle Formation

    PubMed Central

    Samsa, Marcelo M.; Mondotte, Juan A.; Caramelo, Julio J.

    2012-01-01

    Little is known about the mechanism of flavivirus genome encapsidation. Here, functional elements of the dengue virus (DENV) capsid (C) protein were investigated. Study of the N-terminal region of DENV C has been limited by the presence of overlapping cis-acting RNA elements within the protein-coding region. To dissociate these two functions, we used a recombinant DENV RNA with a duplication of essential RNA structures outside the C coding sequence. By the use of this system, the highly conserved amino acids FNML, which are encoded in the RNA cyclization sequence 5′CS, were found to be dispensable for C function. In contrast, deletion of the N-terminal 18 amino acids of C impaired DENV particle formation. Two clusters of basic residues (R5-K6-K7-R9 and K17-R18-R20-R22) were identified as important. A systematic mutational analysis indicated that a high density of positive charges, rather than particular residues at specific positions, was necessary. Furthermore, a differential requirement of N-terminal sequences of C for viral particle assembly was observed in mosquito and human cells. While no viral particles were observed in human cells with a virus lacking the first 18 residues of C, DENV propagation was detected in mosquito cells, although to a level about 50-fold less than that observed for a wild-type (WT) virus. We conclude that basic residues at the N terminus of C are necessary for efficient particle formation in mosquito cells but that they are crucial for propagation in human cells. This is the first report demonstrating that the N terminus of C plays a role in DENV particle formation. In addition, our results suggest that this function of C is differentially modulated in different host cells. PMID:22072762

  15. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤ_d Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  16. Feasibility study for using an extended three-wave model to simulate plasma-based backward Raman amplification in one spatial dimension

    NASA Astrophysics Data System (ADS)

    Wang, T.-L.; Michta, D.; Lindberg, R. R.; Charman, A. E.; Martins, S. F.; Wurtele, J. S.

    2009-12-01

    Results are reported of a one-dimensional simulation study comparing the modeling capability of a recently formulated extended three-wave model [R. R. Lindberg, A. E. Charman, and J. S. Wurtele, Phys. Plasmas 14, 122103 (2007); Phys. Plasmas 15, 055911 (2008)] to that of a particle-in-cell (PIC) code, as well as to a more conventional three-wave model, in the context of the plasma-based backward Raman amplification (PBRA) [G. Shvets, N. J. Fisch, A. Pukhov et al., Phys. Rev. Lett. 81, 4879 (1998); V. M. Malkin, G. Shvets, and N. J. Fisch, Phys. Rev. Lett. 82, 4448 (1999); Phys. Rev. Lett. 84, 1208 (2000)]. The extended three-wave model performs essentially as well as or better than a conventional three-wave description in all temperature regimes tested, and significantly better at the higher temperatures studied, while the computational savings afforded by the extended three-wave model make it a potentially attractive tool that can be used prior to or in conjunction with PIC simulations to model the kinetic effects of PBRA for nonrelativistic laser pulses interacting with underdense thermal plasmas. Very fast but reasonably accurate at moderate plasma temperatures, this model may be used to perform wide-ranging parameter scans or other exploratory analyses quickly and efficiently, in order to guide subsequent simulation via more accurate if intensive PIC techniques or other algorithms approximating the full Vlasov-Maxwell equations.

  17. Coupled Kinetic-MHD Simulations of Divertor Heat Load with ELM Perturbations

    NASA Astrophysics Data System (ADS)

    Cummings, Julian; Chang, C. S.; Park, Gunyoung; Sugiyama, Linda; Pankin, Alexei; Klasky, Scott; Podhorszki, Norbert; Docan, Ciprian; Parashar, Manish

    2010-11-01

    The effect of Type-I ELM activity on divertor plate heat load is a key component of the DOE OFES Joint Research Target milestones for this year. In this talk, we present simulations of kinetic edge physics, ELM activity, and the associated divertor heat loads in which we couple the discrete guiding-center neoclassical transport code XGC0 with the nonlinear extended MHD code M3D using the End-to-end Framework for Fusion Integrated Simulations, or EFFIS. In these coupled simulations, the kinetic code and the MHD code run concurrently on the same massively parallel platform and periodic data exchanges are performed using a memory-to-memory coupling technology provided by EFFIS. The M3D code models the fast ELM event and sends frequent updates of the magnetic field perturbations and electrostatic potential to XGC0, which in turn tracks particle dynamics under the influence of these perturbations and collects divertor particle and energy flux statistics. We describe here how EFFIS technologies facilitate these coupled simulations and discuss results for DIII-D, NSTX and Alcator C-Mod tokamak discharges.

  18. Progress on the Development of the hPIC Particle-in-Cell Code

    NASA Astrophysics Data System (ADS)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

    Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.

  19. N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics

    NASA Astrophysics Data System (ADS)

    Londrillo, Pasquale; Nipoti, Carlo

    2011-02-01

    N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.
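
    For a spherically symmetric system, the non-linear MOND field equation that N-MODY solves reduces to the algebraic relation μ(g/a0) g = g_N between the MOND and Newtonian accelerations. The sketch below inverts that relation for the "simple" interpolating function μ(x) = x/(1 + x), which is an assumed choice here; the record does not name the function used by N-MODY.

```python
# In spherical symmetry the MOND field equation reduces to
#     mu(g / a0) * g = g_N,
# where g_N is the Newtonian acceleration.  For the "simple" interpolating
# function mu(x) = x / (1 + x) (assumed for illustration) this inverts in
# closed form: g = 0.5 * (g_N + sqrt(g_N^2 + 4 a0 g_N)).
import numpy as np

A0 = 1.2e-10                      # MOND acceleration scale, m / s^2

def mond_acceleration(g_newton):
    g_newton = np.asarray(g_newton, dtype=float)
    return 0.5 * (g_newton + np.sqrt(g_newton ** 2 + 4.0 * A0 * g_newton))

for gn in (1e-8, A0, 1e-12):      # Newtonian, transition, deep-MOND regimes
    g = mond_acceleration(gn)
    print(f"g_N = {gn:.1e}  ->  g = {g:.2e}  (boost {g / gn:.2f}x)")
```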

  20. Application of electron closures in extended MHD

    NASA Astrophysics Data System (ADS)

    Held, Eric; Adair, Brett; Taylor, Trevor

    2017-10-01

    Rigorous closure of the extended MHD equations in plasma fluid codes includes the effects of electron heat conduction along perturbed magnetic fields and contributions of the electron collisional friction and stress to the extended Ohm's law. In this work we discuss the application of a continuum numerical solution to the Chapman-Enskog-like electron drift kinetic equation using the NIMROD code. The implementation is a tightly-coupled fluid/kinetic system that carefully addresses time-centering in the advance of the fluid variables with their kinetically-computed closures. Comparisons of spatial accuracy, computational efficiency and required velocity space resolution are presented for applications involving growing magnetic islands in cylindrical and toroidal geometry. The reduction in parallel heat conduction due to particle trapping in toroidal geometry is emphasized. Work supported by DOE under Grant Nos. DE-FC02-08ER54973 and DE-FG02-04ER54746.

  1. Computed secondary-particle energy spectra following nonelastic neutron interactions with C-12 for E(n) between 15 and 60 MeV: Comparisons of results from two calculational methods

    NASA Astrophysics Data System (ADS)

    Dickens, J. K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, dσ/dE, following nonelastic neutron interactions with C-12 for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael, who used an intranuclear cascade code including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed.

  2. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and a user guide are publicly available at https://github.com/regonzar/paravt.
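
    The quantities PARAVT computes in parallel (neighbor lists and Voronoi cell volumes, hence densities) can be reproduced serially for a small particle set with scipy.spatial, which wraps the same Qhull library. The sketch below simply skips unbounded boundary cells, whereas PARAVT treats domain boundaries and periodic conditions explicitly.

```python
# Serial illustration of Voronoi neighbor lists and cell volumes using
# scipy.spatial (a Qhull wrapper).  Unbounded boundary cells are skipped.
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

rng = np.random.default_rng(42)
points = rng.random((200, 3))
vor = Voronoi(points)

# Neighbor list: every Voronoi ridge separates exactly two generating points.
neighbors = [[] for _ in points]
for p, q in vor.ridge_points:
    neighbors[p].append(q)
    neighbors[q].append(p)

# Cell volumes (and local densities) for particles with a bounded Voronoi cell.
volumes = np.full(len(points), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if len(region) > 0 and -1 not in region:
        volumes[i] = ConvexHull(vor.vertices[region]).volume

print(len(neighbors[0]), np.nanmean(1.0 / volumes))   # neighbor count, mean density
```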

  3. Numerical Analysis of Dusty-Gas Flows

    NASA Astrophysics Data System (ADS)

    Saito, T.

    2002-02-01

    This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used for validating the accuracy and performance of the code. The code is then extended for simulating two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, we can choose numerical schemes independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue and the code implementation on Origin2000 is also described. Flow profiles of both the gas and solid particles behind the steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions of unsteady multidimensional simulations.

  4. Unified Models of Turbulence and Nonlinear Wave Evolution in the Extended Solar Corona and Solar Wind

    NASA Technical Reports Server (NTRS)

    Cranmer, Steven R.; Wagner, William (Technical Monitor)

    2004-01-01

    The PI (Cranmer) and Co-I (A. van Ballegooijen) made substantial progress toward the goal of producing a unified model of the basic physical processes responsible for solar wind acceleration. The approach outlined in the original proposal comprised two complementary pieces: (1) to further investigate individual physical processes under realistic coronal and solar wind conditions, and (2) to extract the dominant physical effects from simulations and apply them to a 1D model of plasma heating and acceleration. The accomplishments in Year 2 are divided into these two categories: (1a) a focused study of kinetic magnetohydrodynamic (MHD) turbulence; (1b) a focused study of non-WKB Alfvén wave reflection; and (2) the unified model code. We have continued the development of the computational model of a time-steady open flux tube in the extended corona. The proton-electron Monte Carlo model is being tested, and collisionless wave-particle interactions are being included. In order to better understand how to easily incorporate various kinds of wave-particle processes into the code, the PI performed a detailed study of the so-called "Ito Calculus", i.e., the mathematical theory of how to update the positions of particles in a probabilistic manner when their motions are governed by diffusion in velocity space.
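
    The "Ito Calculus" question above, updating particle velocities probabilistically when they obey velocity-space diffusion, is usually handled with an Euler-Maruyama step for the corresponding Ito stochastic differential equation dv = A(v) dt + sqrt(2 D(v)) dW. The sketch below uses placeholder Ornstein-Uhlenbeck drift and diffusion coefficients, not the coronal wave-particle coefficients of the actual model.

```python
# Generic Euler-Maruyama update for particles whose velocities obey the Ito
# SDE  dv = A(v) dt + sqrt(2 D(v)) dW.  The drift/diffusion coefficients are
# a toy Ornstein-Uhlenbeck choice, used only to check the stationary variance.
import numpy as np

def euler_maruyama_step(v, dt, drift, diffusion, rng):
    dw = rng.normal(0.0, np.sqrt(dt), size=v.shape)     # Wiener increments
    return v + drift(v) * dt + np.sqrt(2.0 * diffusion(v)) * dw

rng = np.random.default_rng(0)
v = np.zeros(10_000)
drift = lambda v: -v                 # relaxation toward zero (rate gamma = 1)
diffusion = lambda v: 0.5            # constant diffusion coefficient D
for _ in range(2000):
    v = euler_maruyama_step(v, 1e-2, drift, diffusion, rng)
print(v.var())                       # should settle near D / gamma = 0.5
```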

  5. Dynamic divertor control using resonant mixed toroidal harmonic magnetic fields during ELM suppression in DIII-D

    NASA Astrophysics Data System (ADS)

    Jia, M.; Sun, Y.; Paz-Soldan, C.; Nazikian, R.; Gu, S.; Liu, Y. Q.; Abrams, T.; Bykov, I.; Cui, L.; Evans, T.; Garofalo, A.; Guo, W.; Gong, X.; Lasnier, C.; Logan, N. C.; Makowski, M.; Orlov, D.; Wang, H. H.

    2018-05-01

    Experiments using Resonant Magnetic Perturbations (RMPs), with a rotating n = 2 toroidal harmonic combined with a stationary n = 3 toroidal harmonic, have validated predictions that divertor heat and particle flux can be dynamically controlled while maintaining Edge Localized Mode (ELM) suppression in the DIII-D tokamak. Here, n is the toroidal mode number. ELM suppression over one full cycle of a rotating n = 2 RMP that was mixed with a static n = 3 RMP field has been achieved. Prominent heat flux splitting on the outer divertor has been observed during ELM suppression by RMPs in low collisionality regime in DIII-D. Strong changes in the three dimensional heat and particle flux footprint in the divertor were observed during the application of the mixed toroidal harmonic magnetic perturbations. These results agree well with modeling of the edge magnetic field structure using the TOP2D code, which takes into account the plasma response from the MARS-F code. These results expand the potential effectiveness of the RMP ELM suppression technique for the simultaneous control of divertor heat and particle load required in ITER.

  6. Saturation of Alfvén modes in tokamaks

    DOE PAGES

    White, Roscoe; Gorelenkov, Nikolai; Gorelenkova, Marina; ...

    2016-09-20

    Here, the growth of Alfvén modes driven unstable by a distribution of high energy particles up to saturation is investigated with a guiding center code, using numerical eigenfunctions produced by linear theory and a numerical high energy particle distribution, in order to make detailed comparison with experiment and with models for saturation amplitudes and the modification of beam profiles. Two innovations are introduced: first, a very low-noise means of obtaining the mode-particle energy and momentum transfer; and second, a spline representation of the actual beam particle distribution.

  7. Saturation of Alfvén modes in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Roscoe; Gorelenkov, Nikolai; Gorelenkova, Marina

    Here, the growth of Alfvén modes driven unstable by a distribution of high energy particles up to saturation is investigated with a guiding center code, using numerical eigenfunctions produced by linear theory and a numerical high energy particle distribution, in order to make detailed comparison with experiment and with models for saturation amplitudes and the modification of beam profiles. Two innovations are introduced: first, a very low-noise means of obtaining the mode-particle energy and momentum transfer; and second, a spline representation of the actual beam particle distribution.

  8. Nonlinear dynamics of toroidal Alfvén eigenmodes in presence of tearing modes

    NASA Astrophysics Data System (ADS)

    Zhu, Jia; Ma, Zhiwei; Wang, Sheng; Zhang, Wei

    2016-10-01

    A new hybrid kinetic-MHD code, CLT-K, is developed to study the nonlinear dynamics of n = 1 toroidal Alfvén eigenmodes (TAEs) in the presence of the m/n = 2/1 tearing mode. It is found that the n = 1 TAE is first excited by isotropic energetic particles in the early stage and reaches a steady state due to wave-particle interaction. After the saturation of the n = 1 TAE, the tearing mode intervenes and triggers a second growth of the mode. The mode then settles into a second steady state due to nonlinear mode-mode coupling with the tearing mode. Both wave-particle and wave-wave interactions are observed in our hybrid simulation.

  9. Representation of particle motion in the auditory midbrain of a developing anuran.

    PubMed

    Simmons, Andrea Megela

    2015-07-01

    In bullfrog tadpoles, a "deaf period" of lessened responsiveness to the pressure component of sounds, evident during the end of the late larval period, has been identified in the auditory midbrain. But coding of underwater particle motion in the vestibular medulla remains stable over all of larval development, with no evidence of a "deaf period." Neural coding of particle motion in the auditory midbrain was assessed to determine if a "deaf period" for this mode of stimulation exists in this brain area in spite of its absence from the vestibular medulla. Recording sites throughout the developing laminar and medial principal nuclei show relatively stable thresholds to z-axis particle motion, up until the "deaf period." Thresholds then begin to increase from this point up through the rest of metamorphic climax, and significantly fewer responsive sites can be located. The representation of particle motion in the auditory midbrain is less robust during later compared to earlier larval stages, overlapping with but also extending beyond the restricted "deaf period" for pressure stimulation. The decreased functional representation of particle motion in the auditory midbrain throughout metamorphic climax may reflect ongoing neural reorganization required to mediate the transition from underwater to amphibious life.

  10. High-Energy Activation Simulation Coupling TENDL and SPACS with FISPACT-II

    NASA Astrophysics Data System (ADS)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark

    2018-06-01

    To address the needs of activation-transmutation simulation in incident-particle fields with energies above a few hundred MeV, the FISPACT-II code has been extended to splice TENDL standard ENDF-6 nuclear data with extended nuclear data forms. The JENDL-2007/HE and HEAD-2009 libraries were processed for FISPACT-II and used to demonstrate the capabilities of the new code version. Tests of the libraries and comparisons against both experimental yield data and the most recent intra-nuclear cascade model results demonstrate that there is a need for improved nuclear data libraries up to and above 1 GeV. Simulations on lead targets show that important radionuclides, such as 148Gd, can vary by more than an order of magnitude where more advanced models find agreement within the experimental uncertainties.

  11. Pairwise Interaction Extended Point-Particle (PIEP) model for multiphase jets and sedimenting particles

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Balachandar, S.

    2017-11-01

    We perform a series of Euler-Lagrange direct numerical simulations (DNS) for multiphase jets and sedimenting particles. The forces the flow exerts on the particles in these two-way coupled simulations are computed using the Basset-Boussinesq-Oseen (BBO) equations. These forces do not explicitly account for particle-particle interactions, even though such pairwise interactions, induced by the perturbations from neighboring particles, may be important, especially when the particle volume fraction is high. Such effects have been largely unaddressed in the literature. Here, we implement the Pairwise Interaction Extended Point-Particle (PIEP) model to simulate the effect of neighboring particle pairs. A simple collision model is also applied to avoid unphysical overlap of solid spherical particles. The simulation results indicate that the PIEP model captures richer and more detailed motion of the dispersed phase (droplets and particles). This work was supported by the Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) project N00014-16-1-2617.

  12. Porting LAMMPS to GPUs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, William Michael; Plimpton, Steven James; Wang, Peng

    2010-03-01

    LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

  13. Modeling of neutral entrainment in an FRC thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brackbill, Jeremiah; Gimelshein, Natalia; Gimelshein, Sergey

    2012-11-27

    Neutral entrainment in a field reversed configuration thruster is modeled numerically with an implicit PIC code extended to include thermal and chemical interactions between plasma and neutral particles. The contribution of charge exchange and electron impact ionization reactions is analyzed, and the sensitivity of the entrainment efficiency to the plasmoid translation velocity and neutral density is evaluated.

  14. Comprehensive Model of Single Particle Pulverized Coal Combustion Extended to Oxy-Coal Conditions

    DOE PAGES

    Holland, Troy; Fletcher, Thomas H.

    2017-02-22

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies either as retrofit technologies or for new construction. But accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. Our work extends a comprehensive char conversion code (CCK), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work several submodels in the CCK code were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include the annealing model, the swelling model, the mode of burning parameter, and the kinetic model, as well as the addition of the chemical percolation devolatilization (CPD) model. We compare the results of the char combustion model to oxy-coal data, and further to parallel data sets near conventional conditions. A potential method to apply the detailed code in CFD work is given.

  15. Comprehensive Model of Single Particle Pulverized Coal Combustion Extended to Oxy-Coal Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Fletcher, Thomas H.

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies either as retrofit technologies or for new construction. But accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. Our work extends a comprehensive char conversion code (CCK), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work several submodels in the CCK code were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include the annealing model, the swelling model, the mode of burning parameter, and the kinetic model, as well as the addition of the chemical percolation devolatilization (CPD) model. We compare the results of the char combustion model to oxy-coal data, and further to parallel data sets near conventional conditions. A potential method to apply the detailed code in CFD work is given.

  16. Non-Maxwellian fast particle effects in gyrokinetic GENE simulations

    NASA Astrophysics Data System (ADS)

    Di Siena, A.; Görler, T.; Doerk, H.; Bilato, R.; Citrin, J.; Johnson, T.; Schneider, M.; Poli, E.; JET Contributors

    2018-04-01

    Fast ions have recently been found to significantly impact and partially suppress plasma turbulence both in experimental and numerical studies in a number of scenarios. Understanding the underlying physics and identifying the range of their beneficial effect is an essential task for future fusion reactors, where highly energetic ions are generated through fusion reactions and external heating schemes. However, in many of the gyrokinetic codes fast ions are, for simplicity, treated as equivalent-Maxwellian-distributed particle species, although it is well known that to rigorously model highly non-thermalised particles, a non-Maxwellian background distribution function is needed. To study the impact of this assumption, the gyrokinetic code GENE has recently been extended to support arbitrary background distribution functions, which may be either analytical (e.g., slowing-down or bi-Maxwellian) or obtained from numerical fast-ion models. A particular JET plasma with strong fast-ion-related turbulence suppression is revisited with these new code capabilities in both linear and nonlinear gyrokinetic simulations. The fast-ion stabilization turns out to be less strong but still substantial with the more realistic distributions, and this improves the quantitative power balance agreement with experiments.
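
    One of the analytical non-Maxwellian backgrounds mentioned above is the isotropic slowing-down distribution, f(v) proportional to 1/(v^3 + v_c^3) below the fast-ion birth speed. A minimal sketch of such a distribution (assumed normalization; not GENE's implementation):

        import numpy as np

        def slowing_down_distribution(v, v_birth, v_crit):
            """Isotropic slowing-down distribution f(v), normalized so that the
            integral of 4*pi*v^2*f(v) dv from 0 to v_birth equals 1.

            f(v) = C / (v^3 + v_crit^3) for v <= v_birth, and 0 above the birth speed.
            """
            v = np.asarray(v, dtype=float)
            # 4*pi*C * int_0^vb v^2/(v^3+vc^3) dv = (4*pi*C/3) * ln(1 + (vb/vc)^3)
            C = 3.0 / (4.0 * np.pi * np.log(1.0 + (v_birth / v_crit) ** 3))
            return np.where(v <= v_birth, C / (v ** 3 + v_crit ** 3), 0.0)

        # example: evaluate f on a grid of speeds normalized to the birth speed
        v = np.linspace(0.0, 1.2, 7)
        print(slowing_down_distribution(v, v_birth=1.0, v_crit=0.4))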

  17. Solar wind interaction with Venus and Mars in a parallel hybrid code

    NASA Astrophysics Data System (ADS)

    Jarvinen, Riku; Sandroos, Arto

    2013-04-01

    We discuss the development and applications of a new parallel hybrid simulation, where ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform also developed at the FMI. The FMI's sequential hybrid model has been used for studies of plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. In particular, the model has been used to interpret in situ particle and magnetic field observations from the plasma environments of Mars, Venus and Titan. Corsair is an open-source MPI (Message Passing Interface) particle and mesh simulation platform, aimed mainly at simulations of diffusive shock acceleration in the solar corona and interplanetary space, but it is now also being extended to global planetary hybrid simulations. In this presentation we discuss challenges and strategies of parallelizing a legacy simulation code as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.

  18. Computed secondary-particle energy spectra following nonelastic neutron interactions with ¹²C for E_n between 15 and 60 MeV: Comparisons of results from two calculational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, dσ/dE, following nonelastic neutron interactions with ¹²C for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael who used an intranuclear cascade code, including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed. 16 refs., 44 figs., 2 tabs.

  19. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2²⁴ = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
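
    The hierarchical (block) time-step idea behind GOTHIC's speedup can be sketched generically: each particle is assigned the largest power-of-two fraction of the maximum step that does not exceed its accuracy-limited step, and at each sub-step only the particles whose level is due are advanced. The sketch below is a plain CPU illustration with hypothetical helper names, not GOTHIC's GPU implementation.

        import numpy as np

        def block_timestep_levels(dt_desired, dt_max, max_levels=32):
            """Assign each particle to the largest step dt_max/2**L not exceeding
            its desired (accuracy-limited) step dt_desired."""
            levels = np.ceil(np.log2(dt_max / np.asarray(dt_desired))).astype(int)
            return np.clip(levels, 0, max_levels)

        def active_mask(levels, substep, max_level):
            """A particle at level L (step dt_max/2**L) is kicked on sub-steps that
            are multiples of 2**(max_level - L) of the smallest step."""
            period = 2 ** (max_level - levels)
            return substep % period == 0

        # toy driver over one block of dt_max, using the smallest step as the clock
        dt_max = 1.0
        dt_desired = np.array([0.9, 0.3, 0.12, 0.04])   # per-particle accuracy limits
        levels = block_timestep_levels(dt_desired, dt_max)
        max_level = levels.max()
        for k in range(2 ** max_level):
            mask = active_mask(levels, k, max_level)
            # kick_and_drift(particles[mask], dt_max / 2.0**levels[mask])  # placeholder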

  20. Nonlinear verification of a linear critical gradient model for energetic particle transport by Alfven eigenmodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, Eric M.; Waltz, R. E.

    Here, a “stiff transport” critical gradient model of energetic particle (EP) transport by EP-driven Alfven eigenmodes (AEs) is verified against local nonlinear gyrokinetic simulations of a well-studied beam-heated DIII-D discharge 146102. A greatly simplifying linear “recipe” for the limiting EP-density gradient (critical gradient) is considered here. In this recipe, the critical gradient occurs when the AE linear growth rate, driven mainly by the EP gradient, exceeds the ion temperature gradient (ITG) or trapped electron mode (TEM) growth rate, driven by the thermal plasma gradient, at the same toroidal mode number (n) as the AE peak growth, well below the ITG/TEM peak n. This linear recipe for the critical gradient is validated against the critical gradient determined from far more expensive local nonlinear simulations in the gyrokinetic code GYRO, as identified by the point of transport runaway when all driving gradients are held fixed. The reduced linear model is extended to include the stabilization from equilibrium E×B velocity shear. The nonlinear verification unambiguously endorses one of two alternative recipes proposed in Ref. 1: the EP-driven AE growth rate should be determined with rather than without added thermal plasma drive.

  1. Nonlinear verification of a linear critical gradient model for energetic particle transport by Alfven eigenmodes

    DOE PAGES

    Bass, Eric M.; Waltz, R. E.

    2017-12-08

    Here, a “stiff transport” critical gradient model of energetic particle (EP) transport by EP-driven Alfven eigenmodes (AEs) is verified against local nonlinear gyrokinetic simulations of a well-studied beam-heated DIII-D discharge 146102. A greatly simplifying linear “recipe” for the limiting EP-density gradient (critical gradient) is considered here. In this recipe, the critical gradient occurs when the AE linear growth rate, driven mainly by the EP gradient, exceeds the ion temperature gradient (ITG) or trapped electron mode (TEM) growth rate, driven by the thermal plasma gradient, at the same toroidal mode number (n) as the AE peak growth, well below the ITG/TEM peak n. This linear recipe for the critical gradient is validated against the critical gradient determined from far more expensive local nonlinear simulations in the gyrokinetic code GYRO, as identified by the point of transport runaway when all driving gradients are held fixed. The reduced linear model is extended to include the stabilization from equilibrium E×B velocity shear. The nonlinear verification unambiguously endorses one of two alternative recipes proposed in Ref. 1: the EP-driven AE growth rate should be determined with rather than without added thermal plasma drive.

  2. Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model

    NASA Astrophysics Data System (ADS)

    Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi

    2018-02-01

    Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.
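
    The stochastic-differential-equation transport step mentioned above can be illustrated in one dimension: advection-diffusion of energetic particles is equivalent to an Ito SDE that is integrated here with a simple Euler-Maruyama scheme. This is a generic toy, not the iPATH code; the speed, diffusion coefficient, and step sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def euler_maruyama_step(x, u, kappa, dkappa_dx, dt):
            """One Euler-Maruyama step for the Ito SDE equivalent to 1-D
            advection-diffusion:  dx = (u + dkappa/dx) dt + sqrt(2*kappa) dW."""
            drift = (u(x) + dkappa_dx(x)) * dt
            diffusion = np.sqrt(2.0 * kappa(x) * dt) * rng.standard_normal(x.shape)
            return x + drift + diffusion

        # toy example: constant solar-wind speed and spatially uniform diffusion
        u = lambda x: 400.0e3            # m/s (illustrative)
        kappa = lambda x: 1.0e17         # m^2/s (illustrative)
        dkappa_dx = lambda x: 0.0

        x = np.full(10000, 0.1 * 1.5e11)  # start all pseudo-particles at 0.1 au
        dt = 60.0                          # s
        for _ in range(1000):
            x = euler_maruyama_step(x, u, kappa, dkappa_dx, dt)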

  3. Differential Cross Section Kinematics for 3-dimensional Transport Codes

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Dick, Frank

    2008-01-01

    In support of the development of 3-dimensional transport codes, this paper derives the relevant relativistic particle kinematic theory. Formulas are given for invariant, spectral and angular distributions in both the lab (spacecraft) and center of momentum frames, for collisions involving 2-, 3-, and n-body final states.
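
    As a compact illustration of the lab/center-of-momentum transformations involved (not the paper's distribution formulas), the sketch below boosts the four-momenta of a two-body initial state into the CM frame using the invariant s; units of MeV with c = 1 and a proton-proton example are assumed.

        import numpy as np

        def boost_z(p4, beta):
            """Lorentz boost of a four-momentum (E, px, py, pz) along +z with velocity beta (c=1)."""
            gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
            E, px, py, pz = p4
            return np.array([gamma * (E - beta * pz), px, py, gamma * (pz - beta * E)])

        # projectile of mass m1 and lab kinetic energy T on a target of mass m2 at rest (MeV, c=1)
        m1, m2, T = 938.272, 938.272, 1000.0
        E1_lab = m1 + T
        pz1_lab = np.sqrt(E1_lab ** 2 - m1 ** 2)
        p_proj = np.array([E1_lab, 0.0, 0.0, pz1_lab])
        p_targ = np.array([m2, 0.0, 0.0, 0.0])

        # invariant mass of the system and velocity of the center-of-momentum frame
        s = (E1_lab + m2) ** 2 - pz1_lab ** 2
        E_cm = np.sqrt(s)                      # total energy available in the CM frame
        beta_cm = pz1_lab / (E1_lab + m2)

        p_proj_cm = boost_z(p_proj, beta_cm)
        p_targ_cm = boost_z(p_targ, beta_cm)
        # total 3-momentum in the CM frame vanishes (up to round-off)
        assert abs(p_proj_cm[3] + p_targ_cm[3]) < 1e-6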

  4. Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A

    2005-01-01

    The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion) ⁶⁰Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This work mainly describes the different methodologies used to speed up the calculations so that the code can be applied efficiently in radiotherapy treatment planning.

  5. Air-to-Air Missile Vector Scoring

    DTIC Science & Technology

    2012-03-22

    Acronyms defined in the report include SIR (sampling-importance resampling), EPF (extended particle filter), and UPF (unscented particle filter). Excerpt: "... an extended particle filter (EPF) or an unscented particle filter (UPF) [20]. The basic concept is to apply a bank of N EKF or UKF filters to move particles from ... Merwe, Doucet, Freitas and Wan provide a comprehensive discussion on the EPF and UPF, including algorithms for implementation [20]."

  6. Gas-particle partitioning of alcohol vapors on organic aerosols.

    PubMed

    Chan, Lap P; Lee, Alex K Y; Chan, Chak K

    2010-01-01

    Single particle levitation using an electrodynamic balance (EDB) has been found to give accurate and direct hygroscopic measurements (gas-particle partitioning of water) for a number of inorganic and organic aerosol systems. In this paper, we extend the use of an EDB to examine the gas-particle partitioning of volatile to semivolatile alcohols, including methanol, n-butanol, n-octanol, and n-decanol, on levitated oleic acid particles. The measured Kp agreed with Pankow's absorptive partitioning model. At high n-butanol vapor concentrations (10³ ppm), the uptake of n-butanol reduced the average molecular weight of the oleic acid particle appreciably and hence increased the Kp according to Pankow's equation. Moreover, the hygroscopicity of mixed oleic acid/n-butanol particles was higher than the predictions given by the UNIFAC model (molecular group contribution method) and the ZSR equation (additive rule), presumably due to molecular interactions between the chemical species in the mixed particles. Despite the high vapor concentrations used, these findings warrant further research on the partitioning of atmospheric organic vapors (Kp) near sources and how collectively they affect the hygroscopic properties of organic aerosols.

  7. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly more difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation -- many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenter's ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle large and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.

  8. The Splashback Radius of Halos from Particle Dynamics. I. The SPARTA Algorithm

    NASA Astrophysics Data System (ADS)

    Diemer, Benedikt

    2017-07-01

    Motivated by the recent proposal of the splashback radius as a physical boundary of dark-matter halos, we present a parallel computer code for Subhalo and PARticle Trajectory Analysis (SPARTA). The code analyzes the orbits of all simulation particles in all host halos, billions of orbits in the case of typical cosmological N-body simulations. Within this general framework, we develop an algorithm that accurately extracts the location of the first apocenter of particles after infall into a halo, or splashback. We define the splashback radius of a halo as the smoothed average of the apocenter radii of individual particles. This definition allows us to reliably measure the splashback radii of 95% of host halos above a resolution limit of 1000 particles. We show that, on average, the splashback radius and mass are converged to better than 5% accuracy with respect to mass resolution, snapshot spacing, and all free parameters of the method.
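
    The heart of the algorithm, identifying a particle's first apocenter after infall from its radial trajectory, can be sketched independently of SPARTA's full machinery (a simplified illustration with hypothetical inputs): find the first snapshot inside a chosen halo radius, then take the first subsequent local maximum of r(t).

        import numpy as np

        def first_apocenter(t, r, r_halo):
            """Return (t_apo, r_apo) at the first apocenter after infall, or None.

            t, r   : particle time and halo-centric radius along its trajectory
            r_halo : radius defining 'infall' (e.g. a fixed multiple of the halo radius)
            """
            inside = np.nonzero(np.asarray(r) < r_halo)[0]
            if inside.size == 0:
                return None                      # the particle never fell in
            i0 = inside[0]                       # first snapshot inside the halo
            for i in range(max(i0, 1), len(r) - 1):
                if r[i] >= r[i - 1] and r[i] > r[i + 1]:
                    return t[i], r[i]            # first local maximum = splashback
            return None                          # apocenter not reached yet

        # the halo-level splashback radius is then a smoothed average of the
        # apocenter radii returned by first_apocenter over all infalling particles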

  9. Coupling MHD and PIC models in 2 dimensions

    NASA Astrophysics Data System (ADS)

    Daldorff, L.; Toth, G.; Sokolov, I.; Gombosi, T. I.; Lapenta, G.; Brackbill, J. U.; Markidis, S.; Amaya, J.

    2013-12-01

    Even extended fluid plasma models, such as Hall MHD, anisotropic ion pressure MHD, and multi-fluid MHD, fail to capture many plasma phenomena. For this reason, we have coupled the Implicit Particle-In-Cell (iPIC3D) code with the BATSRUS global MHD code. The PIC solver is applied in a part of the computational domain, for example in the vicinity of reconnection sites, and overwrites the MHD solution. On the other hand, the fluid solver provides the boundary conditions for the PIC code. To demonstrate the use of the coupled codes for magnetospheric applications, we perform a 2D magnetosphere simulation, where BATSRUS solves for Hall MHD in the whole domain except for the tail reconnection region, which is handled by iPIC3D.

  10. Alternate operating scenarios for NDCX-II

    NASA Astrophysics Data System (ADS)

    Sharp, W. M.; Friedman, A.; Grote, D. P.; Cohen, R. H.; Lund, S. M.; Vay, J.-L.; Waldron, W. L.

    2014-01-01

    NDCX-II is a newly completed accelerator facility at LBNL, built to study ion-heated warm dense matter, as well as aspects of ion-driven targets and intense-beam dynamics for inertial-fusion energy. The baseline design calls for using 12 induction cells to accelerate 30-50 nC of Li+ ions to 1.2 MeV. During commissioning, though, we plan to extend the source lifetime by extracting less total charge. Over time, we expect that NDCX-II will be upgraded to substantially higher energies, necessitating the use of heavier ions to keep a suitable deposition range in targets. For operational flexibility, the option of using a helium plasma source is also being investigated. Each of these options requires development of an alternate acceleration schedule. The schedules here are worked out with a fast-running 1-D particle-in-cell code ASP.

  11. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short range force and the direct summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computational domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well and efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and found excellent agreement.
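
    Of the three force components in the hybrid scheme, the short-range particle-particle (PP) part is the simplest to illustrate; the sketch below is a plain softened direct summation over nearby particles (generic, with an assumed Plummer softening, not PHoToNs' optimized kernel).

        import numpy as np

        def pp_acceleration(pos, mass, eps, G=1.0):
            """Direct-summation gravitational acceleration with Plummer softening.

            pos  : (N, 3) particle positions
            mass : (N,) particle masses
            eps  : softening length
            """
            dx = pos[None, :, :] - pos[:, None, :]      # displacements r_j - r_i
            r2 = np.sum(dx ** 2, axis=-1) + eps ** 2
            inv_r3 = r2 ** -1.5
            np.fill_diagonal(inv_r3, 0.0)                # no self-force
            return G * np.einsum('ij,j,ijk->ik', inv_r3, mass, dx)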

  12. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body code by trading accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  13. Examination of Airborne FDEM System Attributes for UXO Mapping and Detection

    DTIC Science & Technology

    2009-11-01

    quadrature output should only occur when there is a distortion in the transmitter waveform signal that correlates with the quadrature part of the...suggested that the S/N performance of the quadrature output of the two FDEM designs would be similar to the observed S/N of TEM systems, though...the semi-airborne configuration. We propose to extend the current SAIC codes to address this need, and to perform additional modeling using codes

  14. Energetic particle modes of q = 1 high-order harmonics in tokamak plasmas with monotonic weak magnetic shear

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Zhen-Zhen; Wang, Feng; Fu, G. Y.

    Linear and nonlinear simulations of high-order harmonic q=1 energetic particle modes excited by trapped energetic particles in tokamaks are carried out using the kinetic/magnetohydrodynamic hybrid code M3D-K. It is found that with a flat safety factor profile in the core region, the linear growth rate of the high-order harmonics (m=n>1) driven by energetic trapped particles can be higher than that of the m/n=1/1 component. The high m=n>1 modes become more unstable when the pressure of energetic particles becomes higher. Moreover, it is shown that there exist multiple resonant locations satisfying different resonant conditions in the phase space of energetic particles for the high-order harmonic modes, whereas there is only one precessional resonance for the m/n=1/1 harmonic. The fluid nonlinearity reduces the saturation level of the n=1 component, while it hardly affects those of the high n components, especially the modes with m=n=3,4. The frequency of these modes does not chirp significantly, which is different from the typical fishbone driven by trapped particles. In addition, the flattening region of the energetic particle distribution due to high-order harmonic excitation is wider than that due to the m/n=1/1 component, although the m/n=1/1 component has a higher saturation amplitude.

  15. Energetic particle modes of q = 1 high-order harmonics in tokamak plasmas with monotonic weak magnetic shear

    DOE PAGES

    Ren, Zhen-Zhen; Wang, Feng; Fu, G. Y.; ...

    2017-04-24

    Linear and nonlinear simulations of high-order harmonic q=1 energetic particle modes excited by trapped energetic particles in tokamaks are carried out using the kinetic/magnetohydrodynamic hybrid code M3D-K. It is found that with a flat safety factor profile in the core region, the linear growth rate of the high-order harmonics (m=n>1) driven by energetic trapped particles can be higher than that of the m/n=1/1 component. The high m=n>1 modes become more unstable when the pressure of energetic particles becomes higher. Moreover, it is shown that there exist multiple resonant locations satisfying different resonant conditions in the phase space of energetic particles for the high-order harmonic modes, whereas there is only one precessional resonance for the m/n=1/1 harmonic. The fluid nonlinearity reduces the saturation level of the n=1 component, while it hardly affects those of the high n components, especially the modes with m=n=3,4. The frequency of these modes does not chirp significantly, which is different from the typical fishbone driven by trapped particles. In addition, the flattening region of the energetic particle distribution due to high-order harmonic excitation is wider than that due to the m/n=1/1 component, although the m/n=1/1 component has a higher saturation amplitude.

  16. The PARTRAC code: Status and recent developments

    NASA Astrophysics Data System (ADS)

    Friedland, Werner; Kundrat, Pavel

    Biophysical modeling is of particular value for predictions of radiation effects due to manned space missions. PARTRAC is an established tool for Monte Carlo-based simulations of radiation track structures, damage induction in cellular DNA and its repair [1]. Dedicated modules describe interactions of ionizing particles with the traversed medium, the production and reactions of reactive species, and score DNA damage determined by overlapping track structures with multi-scale chromatin models. The DNA repair module describes the repair of DNA double-strand breaks (DSB) via the non-homologous end-joining pathway; the code explicitly simulates the spatial mobility of individual DNA ends in parallel with their processing by major repair enzymes [2]. To simulate the yields and kinetics of radiation-induced chromosome aberrations, the repair module has been extended by tracking the information on the chromosome origin of ligated fragments as well as the presence of centromeres [3]. PARTRAC calculations have been benchmarked against experimental data on various biological endpoints induced by photon and ion irradiation. The calculated DNA fragment distributions after photon and ion irradiation reproduce corresponding experimental data and their dose- and LET-dependence. However, in particular for high-LET radiation many short DNA fragments are predicted below the detection limits of the measurements, so that the experiments significantly underestimate DSB yields by high-LET radiation [4]. The DNA repair module correctly describes the LET-dependent repair kinetics after ⁶⁰Co gamma-rays and different N-ion radiation qualities [2]. First calculations on the induction of chromosome aberrations have overestimated the absolute yields of dicentrics, but correctly reproduced their relative dose-dependence and the difference between gamma- and alpha particle irradiation [3]. Recent developments of the PARTRAC code include, first, a model of hetero- vs euchromatin structures to enable accounting for variations in DNA damage yields, complexity and repair between these regions. Second, the applicability of the code to low-energy ions has been extended to full stopping by using a modified Barkas scaling of proton cross sections for ions heavier than helium. Third, ongoing studies aim at hitherto unprecedented benchmarking of the code against experiments with sub-µm focused bunches of low-LET ions mimicking single high-LET ion tracks [5], which separate effects of damage clustering on a sub-µm scale from DNA damage complexity on a nanometer scale. Fourth, motivated by implications for the involvement of mitochondria in intercellular signaling and radiation-induced bystander effects, ongoing work extends the range of PARTRAC DNA models to radiation effects on mitochondrial DNA. The contribution will discuss the PARTRAC modules, benchmarks to experimental data, recent and ongoing developments of the code, with special attention to its implications and potential applications in radiation protection and space research. Acknowledgement. This work was partially funded by the EU (Contract FP7-249689 ‘DoReMi’). References 1. Friedland et al., Mutat. Res. 711, 28 (2011) 2. Friedland et al., Int. J. Radiat. Biol. 88, 129 (2012) 3. Friedland et al., Mutat. Res. 756, 213 (2013) 4. Alloni et al., Radiat. Res. 179, 690 (2013) 5. Schmid et al., Phys. Med. Biol. 57, 5889 (2012)

  17. Light scattering by planetary-regolith analog samples: computational results

    NASA Astrophysics Data System (ADS)

    Väisänen, Timo; Markkanen, Johannes; Hadamcik, Edith; Levasseur-Regourd, Anny-Chantal; Lasue, Jeremie; Blum, Jürgen; Penttilä, Antti; Muinonen, Karri

    2017-04-01

    We compute light scattering by a planetary-regolith analog surface. The corresponding experimental work is from Hadamcik et al. [1] with the PROGRA2-surf [2] device measuring the polarization of dust particles. The analog samples are low density (volume fraction 0.15 ± 0.03) agglomerates produced by random ballistic deposition of almost equisized silica spheres (refractive index n=1.5 and diameter 1.45 ± 0.06 µm). Computations are carried out with the recently developed codes entitled Radiative Transfer with Reciprocal Transactions (R2T2) and Radiative Transfer Coherent Backscattering with incoherent interactions (RT-CB-ic). Both codes incorporate the so-called incoherent treatment which enhances the applicability of the radiative transfer as shown by Muinonen et al. [3]. As a preliminary result, we have computed scattering from a large spherical medium with the RT-CB-ic using equal-sized particles with diameters of 1.45 microns. The preliminary results have shown that the qualitative characteristics are similar for the computed and measured intensity and polarization curves but that there are still deviations between the characteristics. We plan to remove the deviations by incorporating a size distribution of particles (1.45 ± 0.02 microns) and detailed information about the volume density profile within the analog surface. Acknowledgments: We acknowledge the ERC Advanced Grant no. 320773 entitled Scattering and Absorption of Electromagnetic Waves in Particulate Media (SAEMPL). Computational resources were provided by CSC - IT Centre for Science Ltd, Finland. References: [1] Hadamcik E. et al. (2007), JQSRT, 106, 74-89 [2] Levasseur-Regourd A.C. et al. (2015), Polarimetry of stars and planetary systems, CUP, 61-80 [3] Muinonen K. et al. (2016), extended abstract for EMTS.

  18. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.

  19. Hybrid petacomputing meets cosmology: The Roadrunner Universe project

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Lukić, Zarija; Daniel, David; Fasel, Patricia; Desai, Nehal; Heitmann, Katrin; Hsu, Chung-Hsing; Ankeny, Lee; Mark, Graham; Bhattacharya, Suman; Ahrens, James

    2009-07-01

    The target of the Roadrunner Universe project at Los Alamos National Laboratory is a set of very large cosmological N-body simulation runs on the hybrid supercomputer Roadrunner, the world's first petaflop platform. Roadrunner's architecture presents opportunities and difficulties characteristic of next-generation supercomputing. We describe a new code designed to optimize performance and scalability by explicitly matching the underlying algorithms to the machine architecture, and by using the physics of the problem as an essential aid in this process. While applications will differ in specific exploits, we believe that such a design process will become increasingly important in the future. The Roadrunner Universe project code, MC3 (Mesh-based Cosmology Code on the Cell), uses grid and direct particle methods to balance the capabilities of Roadrunner's conventional (Opteron) and accelerator (Cell BE) layers. Mirrored particle caches and spectral techniques are used to overcome communication bandwidth limitations and possible difficulties with complicated particle-grid interaction templates.

  20. One-dimensional energetic particle quasilinear diffusion for realistic TAE instabilities

    NASA Astrophysics Data System (ADS)

    Duarte, Vinicius; Ghantous, Katy; Berk, Herbert; Gorelenkov, Nikolai

    2014-10-01

    Owing to the proximity of the characteristic phase (Alfvén) velocity to typical energetic particle (EP) superthermal velocities, toroidicity-induced Alfvén eigenmodes (TAEs) can be resonantly destabilized, endangering plasma performance. It is therefore critically important to understand the deleterious effects on confinement of the fast-ion-driven instabilities expected in fusion-grade plasmas. We propose to study the interaction of EPs and TAEs using a line-broadened quasilinear model, which captures the interaction in both regimes of isolated and overlapping modes. The resonant particles diffuse in phase space, where the problem essentially reduces to one dimension: the kinetic energy is nearly constant and the diffusion proceeds mainly along the canonical toroidal angular momentum. Mode structures and wave-particle resonances are computed by the NOVA code and are used in a quasilinear diffusion code that is being written to study the evolution of the distribution function, under the assumption that the mode structures remain virtually unaltered during the diffusion. A new scheme for the resonant particle diffusion is proposed that builds on the 1-D nature of the diffusion from a single mode, leading to a momentum-conserving difference scheme even when there is mode overlap.
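
    The 1-D diffusion along the canonical toroidal angular momentum can be advanced with a flux-conservative finite-difference step, which conserves the total number of particles by construction; the momentum-conserving scheme proposed above is a refinement of this idea. A generic sketch with zero-flux boundaries (not the code described in the abstract):

        import numpy as np

        def diffuse_1d(f, D_edge, dP, dt):
            """One explicit step of df/dt = d/dP ( D df/dP ) in conservative (flux) form.

            f      : (N,) distribution on cell centers P_i
            D_edge : (N+1,) diffusion coefficient on cell edges (the first and last
                     values are ignored because zero-flux boundaries are imposed)
            """
            flux = np.zeros(len(f) + 1)
            flux[1:-1] = -D_edge[1:-1] * np.diff(f) / dP   # F_{i+1/2} = -D df/dP
            # zero-flux boundaries: flux[0] = flux[-1] = 0, so sum(f) is conserved
            return f - dt * np.diff(flux) / dP

        # stability of the explicit step requires roughly dt <= dP**2 / (2 * max(D))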

  1. MHD Code Optimizations and Jets in Dense Gaseous Halos

    NASA Astrophysics Data System (ADS)

    Gaibler, Volker; Vigelius, Matthias; Krause, Martin; Camenzind, Max

    We have further optimized and extended the 3D MHD code NIRVANA. The magnetized part runs in parallel, reaching 19 Gflops per SX-6 node, and has a passively advected particle population. In addition, the code is now MPI-parallel on top of the shared-memory parallelization. On a 512³ grid, we reach 561 Gflops with 32 nodes on the SX-8. We have also successfully used FLASH on the Opteron cluster. Scientific results are preliminary so far. We report one computation of highly resolved cocoon turbulence. While we find some similarities to earlier 2D work by us and others, we note a strange reluctance of cold material to enter the low-density cocoon, which has to be investigated further.

  2. Exploring potential Pluto-generated neutral tori

    NASA Astrophysics Data System (ADS)

    Smith, Howard T.; Hill, Matthew; Kollmann, Peter; McNutt, Ralph

    2015-11-01

    The NASA New Horizons mission to Pluto is providing unprecedented insight into this mysterious outer solar system body. Escaping molecular nitrogen is of particular interest and is possibly analogous to similar features observed at moons of Saturn and Jupiter. Such escaping N2 has the potential of creating molecular nitrogen and N (from molecular dissociation) tori, or partial toroidal extended particle distributions. The presence of these features would provide the first confirmation of an extended toroidal neutral feature on a planetary scale in our solar system. While escape velocities are anticipated to be lower than those at Enceladus, Io, or even Europa, particle lifetimes along Pluto's orbit are much longer (on the order of tens of years) as a result of much weaker solar interaction processes there. Thus, with a ~248 year orbit, Pluto may in fact be generating an extended toroidal feature along its orbit. For this work, we modify and apply our 3-D Monte Carlo neutral torus model (previously used at Saturn, Jupiter and Mercury) to analyze the theoretical possibility and scope of potential Pluto-generated neutral tori. Our model injects weighted particles and tracks their trajectories under the influence of all gravitational fields, including interactions with other particles, solar photons, and collisions with Pluto. We present anticipated N2 and N tori based on current estimates of source characterization and environmental conditions. We also present an analysis of sensitivity to the assumed initial conditions. Such results can provide insight into the Pluto system as well as valuable interpretation of New Horizons observational data.

  3. An extended Reed Solomon decoder design

    NASA Technical Reports Server (NTRS)

    Chen, J.; Owsley, P.; Purviance, J.

    1991-01-01

    It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. This decoder is especially useful for error patterns consisting of a long burst plus some random errors.

  4. PoMiN: A Post-Minkowskian N-body Solver

    NASA Astrophysics Data System (ADS)

    Feng, Justin; Baumann, Mark; Hall, Bryton; Doss, Joel; Spencer, Lucas; Matzner, Richard

    2018-06-01

    In this paper, we introduce PoMiN, a lightweight N-body code based on the post-Minkowskian N-body Hamiltonian of Ledvinka et al., which includes general relativistic effects up to first order in Newton’s constant G, and all orders in the speed of light c. PoMiN is written in C and uses a fourth-order Runge–Kutta integration scheme. PoMiN has also been written to handle an arbitrary number of particles (both massive and massless), with a computational complexity that scales as O(N 2). We describe the methods we used to simplify and organize the Hamiltonian, and the tests we performed (convergence, conservation, and analytical comparison tests) to validate the code.
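
    The classical fourth-order Runge-Kutta step used by PoMiN is standard; a minimal generic sketch follows (Python rather than the code's C, with a Newtonian two-body right-hand side standing in for the post-Minkowskian Hamiltonian flow, G and the masses set to 1 for illustration).

        import numpy as np

        def rk4_step(f, t, z, dt):
            """One classical fourth-order Runge-Kutta step for dz/dt = f(t, z)."""
            k1 = f(t, z)
            k2 = f(t + 0.5 * dt, z + 0.5 * dt * k1)
            k3 = f(t + 0.5 * dt, z + 0.5 * dt * k2)
            k4 = f(t + dt, z + dt * k3)
            return z + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

        def two_body(t, z):
            """Newtonian two-body right-hand side; z = (x1, v1, x2, v2) flattened."""
            x1, v1, x2, v2 = z.reshape(4, 3)
            r = x2 - x1
            a = r / np.linalg.norm(r) ** 3
            return np.concatenate([v1, a, v2, -a])

        z = np.array([-0.5, 0, 0,  0, -0.5, 0,  0.5, 0, 0,  0, 0.5, 0], dtype=float)
        for _ in range(1000):
            z = rk4_step(two_body, 0.0, z, 1e-3)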

  5. Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.

    PubMed

    Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle

    2014-11-01

    To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissue. Monte Carlo (MC) simulation codes are extremely useful tools for calculating deposited energy. The present work was focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source localized within the cortical bone. S-value (absorbed dose per unit cumulated activity) calculations were performed with the Monte Carlo codes PENELOPE and Monte Carlo N-Particle eXtended (MCNPX). The cytoplasm, nucleus, cell surface, mouse femur bone, and Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2-10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, cytoplasm, and cell surface. S-values calculated with PENELOPE agreed very well with the MCNPX results and the Medical Internal Radiation Dose (MIRD) values, with relative deviations of less than 4.5%. The dose distribution to mouse bone marrow cells showed that the cells localized near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.
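
    For a Monte Carlo tally, the S-value defined above reduces to the mean energy imparted to the target region per decay divided by the target mass. A minimal unit-conversion sketch with illustrative numbers (not the paper's geometry or spectra):

        import math

        MEV_TO_J = 1.602176634e-13

        def s_value(mean_energy_per_decay_mev, target_mass_kg):
            """S-value in Gy/(Bq*s): mean energy imparted to the target per decay
            (from the Monte Carlo tally, in MeV) divided by the target mass (kg)."""
            return mean_energy_per_decay_mev * MEV_TO_J / target_mass_kg

        # example: hypothetical 5 keV per decay deposited in a 5-micron-radius,
        # water-equivalent spherical nucleus
        radius_m = 5e-6
        mass_kg = 1000.0 * (4.0 / 3.0) * math.pi * radius_m ** 3
        print(s_value(5e-3, mass_kg))   # Gy per decay, i.e. Gy/(Bq*s)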

  6. Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Alejandro

    2017-12-01

    Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
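
    The SPH interpolation that Py-SPHViewer relies on can be written compactly: the density (or any particle-carried quantity) at a point is a kernel-weighted sum over neighbouring particles. The sketch below uses the standard Monaghan cubic-spline kernel with support 2h and a brute-force neighbour search; it is an illustration, not the package's optimized implementation.

        import numpy as np

        def cubic_spline_kernel(r, h):
            """Monaghan M4 cubic spline kernel in 3D, support radius 2h."""
            q = r / h
            sigma = 1.0 / (np.pi * h ** 3)
            w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                         np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return sigma * w

        def sph_density(points, positions, masses, h):
            """Brute-force SPH density estimate at 'points' from particle data."""
            d = np.linalg.norm(points[:, None, :] - positions[None, :, :], axis=-1)
            return np.sum(masses[None, :] * cubic_spline_kernel(d, h), axis=-1)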

  7. SEURAT: SPH scheme extended with ultraviolet line radiative transfer

    NASA Astrophysics Data System (ADS)

    Abe, Makito; Suzuki, Hiroyuki; Hasegawa, Kenji; Semelin, Benoit; Yajima, Hidenobu; Umemura, Masayuki

    2018-05-01

    We present a novel Lyman alpha (Lyα) radiative transfer code, SEURAT (SPH scheme Extended with Ultraviolet line RAdiative Transfer), where line scatterings are solved adaptively with the resolution of the smoothed particle hydrodynamics (SPH). The radiative transfer method implemented in SEURAT is based on a Monte Carlo algorithm in which the scattering and absorption by dust are also incorporated. We perform standard test calculations to verify the validity of the code: (i) emergent spectra from a static uniform sphere, (ii) emergent spectra from an expanding uniform sphere, and (iii) escape fraction from a dusty slab. Thereby, we demonstrate that our code solves the Lyα radiative transfer with sufficient accuracy. We emphasize that SEURAT can treat the transfer of Lyα photons even in highly complex systems that have significantly inhomogeneous density fields. The high adaptivity of SEURAT is desirable to solve the propagation of Lyα photons in the interstellar medium of young star-forming galaxies like Lyα emitters (LAEs). Thus, SEURAT provides a powerful tool to model the emergent spectra of Lyα emission, which can be compared to the observations of LAEs.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceglio, N.M.; George, E.V.; Brooks, K.M.

    The first successful demonstration of high-resolution, tomographic imaging of a laboratory plasma using coded imaging techniques is reported. Zone plate coded imaging (ZPCI) has been used to image the x-ray emission from laser-compressed DT-filled microballoons. The zone plate camera viewed an x-ray spectral window extending from below 2 keV to above 6 keV. It exhibited a resolution of approximately 8 µm, a magnification factor of approximately 13, and subtended a radiation collection solid angle at the target of approximately 10⁻² sr. X-ray images using ZPCI were compared with those taken using a grazing-incidence reflection x-ray microscope, and the agreement was excellent. In addition, the zone plate camera produced tomographic images. The nominal tomographic resolution was approximately 75 µm. This allowed three-dimensional viewing of target emission from a single shot in planar "slices". In addition to its tomographic capability, the great advantage of the coded imaging technique lies in its applicability to hard (greater than 10 keV) x-ray and charged particle imaging. Experiments involving coded imaging of the suprathermal x-ray and high-energy alpha particle emission from laser-compressed microballoon targets are discussed.

  9. An N-body Integrator for Planetary Rings

    NASA Astrophysics Data System (ADS)

    Hahn, Joseph M.

    2011-04-01

    A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute-force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows streamlines to be treated as straight wires of constant linear density. Consequently, gravity due to these streamlines is a simple function of the particle's radial distance to all streamlines. And because particles are responding to smooth gravitating streamlines, rather than discrete particles, this method eliminates the stirring that ordinarily occurs in brute-force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges are executed in typically ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.
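
    The key simplification, treating each streamline as a straight wire of constant linear density λ, means the radial acceleration felt from streamline k is that of an infinite line mass, 2Gλ_k/Δr directed toward the wire. A schematic sketch with illustrative variable names (not the author's code):

        import numpy as np

        G = 6.674e-11  # SI units, illustrative

        def streamline_acceleration(r_particle, r_streamlines, lam):
            """Radial acceleration on a particle from ring streamlines treated as
            straight wires of constant linear density lam[k]:
            a = sum_k 2*G*lam[k] / dr_k, directed toward each wire (sign of dr_k)."""
            dr = r_streamlines - r_particle          # radial offset to each wire
            keep = np.abs(dr) > 1e-12                # exclude the particle's own streamline
            return np.sum(2.0 * G * lam[keep] / dr[keep])

        # illustrative use: streamlines spanning a narrow annulus, made-up linear density
        r_streamlines = np.linspace(1.20e8, 1.30e8, 2001)   # m
        lam = np.full_like(r_streamlines, 4.0e8)            # kg/m
        a = streamline_acceleration(1.25e8 + 3.0, r_streamlines, lam)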

  10. Smooth particle hydrodynamics: theory and application to the origin of the moon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benz, W.

    1986-01-01

    The origin of the Moon is modeled with the so-called smoothed particle hydrodynamics (SPH) method (Lucy 1977, Monaghan 1985), which substitutes a finite set of extended particles for the fluid, so that the hydrodynamics equations reduce to the equations of motion of individual particles. These equations of motion differ from the standard gravitational N-body problem only insofar as pressure-gradient and viscosity terms must be added to the gradient of the potential to obtain the forces between the particles. The numerical tools developed for "classical" N-body problems can therefore be readily applied to solve three-dimensional hydrodynamical problems. 12 refs., 1 fig.

  11. Fission time scale from pre-scission neutron and α multiplicities in the 16O + 194Pt reaction

    NASA Astrophysics Data System (ADS)

    Kapoor, K.; Verma, S.; Sharma, P.; Mahajan, R.; Kaur, N.; Kaur, G.; Behera, B. R.; Singh, K. P.; Kumar, A.; Singh, H.; Dubey, R.; Saneesh, N.; Jhingan, A.; Sugathan, P.; Mohanto, G.; Nayak, B. K.; Saxena, A.; Sharma, H. P.; Chamoli, S. K.; Mukul, I.; Singh, V.

    2017-11-01

    Pre- and post-scission α-particle multiplicities have been measured for the reaction ¹⁶O + ¹⁹⁴Pt at 98.4 MeV, forming the ²¹⁰Rn compound nucleus. α particles were measured at various angles in coincidence with the fission fragments. The moving-source technique was used to extract the pre- and post-scission contributions to the particle multiplicity. Studies of the fission mechanism using different probes are helpful in understanding the detailed reaction dynamics. The neutron multiplicities for this reaction have been reported earlier. The multiplicities of neutrons and α particles were reproduced using the standard statistical model code joanne2 by varying the transient (τ_tr) and saddle-to-scission (τ_ssc) times. This code includes deformation-dependent particle transmission coefficients, binding energies, and level densities. Fission time scales of the order of 50-65 × 10⁻²¹ s are required to reproduce the neutron and α-particle multiplicities.

  12. Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM

    NASA Astrophysics Data System (ADS)

    de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl

    2002-03-01

    We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent, numerical simulation, which includes dust particles, to study the potential role of large-scale, gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10⁵-10⁶) in reasonable CPU times. The performance of our implementation of the parallel code on an Origin 2000 supercomputer is presented and discussed. It exhibits very good speedup behavior and low load imbalance. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.

  13. Revised Extended Grid Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, Roger L.

    The Revised Eolus Grid Library (REGL) is a mesh-tracking library that was developed for use with the MCNP6™ computer code so that (radiation) particles can track on an unstructured mesh. The unstructured mesh is a finite element representation of any geometric solid model created with a state-of-the-art CAE/CAD tool. The mesh-tracking library is written using modern Fortran and programming standards; the library is Fortran 2003 compliant. The library was created with a defined application programmer interface (API) so that it could easily integrate with other particle tracking/transport codes. The library does not handle parallel processing via the message passing interface (mpi), but has been used successfully where the host code handles the mpi calls. The library is thread-safe and supports the OpenMP paradigm. As a library, all features are available through the API, and overall a tight coupling between it and the host code is required. Features of the library are summarized with the following list: can accommodate first- and second-order 4-, 5-, and 6-sided polyhedra; any combination of element types may appear in a single geometry model; parts may not contain tetrahedra mixed with other element types; pentahedra and hexahedra can be together in the same part; robust handling of overlaps and gaps; tracks element-to-element to produce path-length results at the element level; finds element numbers for a given mesh location; finds intersection points on element faces for the particle tracks; produces a data file for post-processing results analysis; reads Abaqus .inp input (ASCII) files to obtain information for the global mesh model; supports parallel input processing via mpi; and supports parallel particle transport by both mpi and OpenMP.

  14. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  15. Dynamic ELM and divertor control using resonant toroidal multi-mode magnetic fields in DIII-D and EAST

    NASA Astrophysics Data System (ADS)

    Sun, Youwen

    2017-10-01

    A rotating n = 2 Resonant Magnetic Perturbation (RMP) field combined with a stationary n = 3 RMP field has validated predictions that access to ELM suppression can be improved, while divertor heat and particle flux can also be dynamically controlled in DIII-D. Recent observations in the EAST tokamak indicate that edge magnetic topology changes, due to nonlinear plasma response to magnetic perturbations, play a critical role in accessing ELM suppression. MARS-F code MHD simulations, which include the plasma response to the RMP, indicate the nonlinear transition to ELM suppression is optimized by configuring the RMP coils to drive maximal edge stochasticity. Consequently, mixed toroidal multi-mode RMP fields, which produce more densely packed islands over a range of additional rational surfaces, improve access to ELM suppression, and further spread heat loading on the divertor. Beneficial effects of this multi-harmonic spectrum on ELM suppression have been validated in DIII-D. Here, the threshold current required for ELM suppression with a mixed n spectrum, where part of the n = 3 RMP field is replaced by an n = 2 field, is smaller than the case with pure n = 3 field. An important further benefit of this multi-mode approach is that significant changes of 3D particle flux footprint profiles on the divertor are found in the experiment during the application of a rotating n = 2 RMP field superimposed on a static n = 3 RMP field. This result was predicted by modeling studies of the edge magnetic field structure using the TOP2D code which takes into account plasma response from MARS-F code. These results expand physics understanding and potential effectiveness of the technique for reliably controlling ELMs and divertor power/particle loading distributions in future burning plasma devices such as ITER. Work supported by USDOE under DE-FC02-04ER54698 and NNSF of China under 11475224.

  16. Extended-release niacin treatment of the atherogenic lipid profile and lipoprotein(a) in diabetes.

    PubMed

    Pan, Jianqiu; Van, Joanne T; Chan, Eve; Kesala, Renata L; Lin, Michael; Charles, M Arthur

    2002-09-01

    We tested the hypotheses that extended-release niacin is effective for the separate treatments of abnormalities in low-density lipoprotein (LDL) size, high-density lipoprotein (HDL)-2, and lipoprotein(a) [Lp(a)] without potential negative effects on glycated hemoglobin levels. The lipids that constitute the atherogenic lipid profile (ALP), such as triglycerides, small, dense LDL-cholesterol particle concentration, LDL particle size, total HDL-cholesterol (HDLc), HDL-2, and HDL-2 cholesterol concentration, as well as total LDL-cholesterol (LDLc) and Lp(a), were measured in 36 diabetic patients with primary abnormalities of LDL particle size (n = 25), HDL-2 (n = 23), and/or Lp(a) (n = 12) before and after extended-release niacin treatment. LDL particle size and HDL-2 were measured using polyacrylamide gradient gel electrophoresis, and Lp(a) was measured by enzyme-linked immunosorbent assay (ELISA). After extended-release niacin, LDL peak particle diameter increased from 25.2 +/- 0.6 nm to 26.1 +/- 0.7 nm (P <.0001); small, dense LDLc concentration decreased from 30 +/- 17 mg/dL to 17 +/- 10 mg/dL (P <.0001); total HDLc increased from 42 +/- 9 mg/dL to 57 +/- 16 mg/dL (P <.0001); HDL-2 as the percent of total HDLc mass increased from 34% +/- 10% to 51% +/- 17% (P <.0001); and Lp(a) decreased from 37 +/- 10 mg/dL to 23 +/- 10 mg/dL (P <.001). Mean hemoglobin A(1c) level was improved during treatment from 7.5% +/- 1.6% to 6.5% +/- 0.9% (P <.0001). A subset of patients who had no change in hemoglobin A(1c) levels before and after treatment (6.8% +/- 1% v 6.7% +/- 1%; not significant) showed identical lipid changes. Twenty-two percent of patients were unable to tolerate extended-release niacin due to reversible side effects. These data indicate that in diabetic patients, extended-release niacin (1) is effective for separately treating diabetic dyslipidemias associated with abnormal LDL size, HDL-2, and Lp(a) independently of glycated hemoglobin levels; (2) must be used with modern and aggressive oral hypoglycemic agents or insulin treatment; and (3) is a major drug for the treatment of diabetic dyslipidemias because of its broad spectrum of effectiveness for the ALP and Lp(a). Copyright 2002, Elsevier Science (USA). All rights reserved.

  17. LIDT-DD: A new self-consistent debris disc model that includes radiation pressure and couples dynamical and collisional evolution

    NASA Astrophysics Data System (ADS)

    Kral, Q.; Thébault, P.; Charnoz, S.

    2013-10-01

    Context. In most current debris disc models, the dynamical and the collisional evolutions are studied separately with N-body and statistical codes, respectively, because of stringent computational constraints. In particular, incorporating collisional effects (especially destructive collisions) into an N-body scheme has proven a very arduous task because of the exponential increase of particles it would imply. Aims: We present here LIDT-DD, the first code able to mix both approaches in a fully self-consistent way. Our aim is for it to be generic enough to be applied to any astrophysical case where we expect dynamics and collisions to be deeply interlocked with one another: planets in discs, violent massive breakups, destabilized planetesimal belts, bright exozodiacal discs, etc. Methods: The code takes its basic architecture from the LIDT3D algorithm for protoplanetary discs, but has been strongly modified and updated to handle the very constraining specificities of debris disc physics: high-velocity fragmenting collisions, radiation-pressure affected orbits, absence of gas that never relaxes initial conditions, etc. It has a 3D Lagrangian-Eulerian structure, where grains of a given size at a given location in a disc are grouped into super-particles or tracers whose orbits are evolved with an N-body code and whose mutual collisions are individually tracked and treated using a particle-in-a-box prescription designed to handle fragmenting impacts. To cope with the wide range of possible dynamics for same-sized particles at any given location in the disc, and in order not to lose important dynamical information, tracers are sorted and regrouped into dynamical families depending on their orbits. A complex reassignment routine, which searches for redundant tracers in each family and reassigns them where they are needed, prevents the number of tracers from diverging. Results: The LIDT-DD code has been successfully tested on simplified cases for which robust results have been obtained in past studies: we retrieve the classical features of particle size distributions in unperturbed discs and the outer radial density profiles in ~r^-1.5 outside narrow collisionally active rings as well as the depletion of small grains in dynamically cold discs. The potential of the new code is illustrated with the test case of the violent breakup of a massive planetesimal within a debris disc. Preliminary results show that we are able for the first time to quantify the timescale over which the signature of such massive break-ups can be detected. In addition to studying such violent transient events, the main potential future applications of the code are planet and disc interactions, and more generally, any configurations where dynamics and collisions are expected to be intricately connected.
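    The grouping of tracers into dynamical families can be pictured with a toy sketch. Here tracers of a given grain size are simply binned by their orbital elements; the binning criterion, field names, and tolerances are hypothetical simplifications and do not reproduce the far more elaborate sorting and reassignment routine used in LIDT-DD.

```python
from collections import defaultdict

def group_into_families(tracers, da=0.05, de=0.02):
    """Group tracer super-particles of one grain size into dynamical
    families by coarse binning of their orbital elements (semi-major
    axis a in AU, eccentricity e).  Purely illustrative."""
    families = defaultdict(list)
    for t in tracers:
        key = (round(t["a"] / da), round(t["e"] / de))
        families[key].append(t["id"])
    return families

# made-up tracers: a spread of orbits around 50 AU
tracers = [{"id": i, "a": 50 + 0.1 * i, "e": 0.01 * (i % 5)} for i in range(20)]
for key, members in group_into_families(tracers).items():
    print(key, members)
```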

  18. Verification of gyrokinetic particle simulation of current-driven instability in fusion plasmas. I. Internal kink mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClenaghan, J.; Lin, Z.; Holod, I.

    The gyrokinetic toroidal code (GTC) capability has been extended for simulating internal kink instability with kinetic effects in toroidal geometry. The global simulation domain covers the magnetic axis, which is necessary for simulating current-driven instabilities. GTC simulation in the fluid limit of the kink modes in cylindrical geometry is verified by benchmarking with a magnetohydrodynamic eigenvalue code. Gyrokinetic simulations of the kink modes in the toroidal geometry find that ion kinetic effects significantly reduce the growth rate even when the banana orbit width is much smaller than the radial width of the perturbed current layer at the mode rational surface.

  19. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons, either as single particles or as coupled particles, can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  20. Kinetic modeling of x-ray laser-driven solid Al plasmas via particle-in-cell simulation

    NASA Astrophysics Data System (ADS)

    Royle, R.; Sentoku, Y.; Mancini, R. C.; Paraschiv, I.; Johzaki, T.

    2017-06-01

    Solid-density plasmas driven by intense x-ray free-electron laser (XFEL) radiation are seeded by sources of nonthermal photoelectrons and Auger electrons that ionize and heat the target via collisions. Simulation codes that are commonly used to model such plasmas, such as collisional-radiative (CR) codes, typically assume a Maxwellian distribution and thus instantaneous thermalization of the source electrons. In this study, we present a detailed description and initial applications of a collisional particle-in-cell code, picls, that has been extended with a self-consistent radiation transport model and Monte Carlo models for photoionization and KLL Auger ionization, enabling the fully kinetic simulation of XFEL-driven plasmas. The code is used to simulate two experiments previously performed at the Linac Coherent Light Source investigating XFEL-driven solid-density Al plasmas. It is shown that picls-simulated pulse transmissions using the Ecker-Kröll continuum-lowering model agree much better with measurements than do simulations using the Stewart-Pyatt model. Good quantitative agreement is also found between the time-dependent picls results and those of analogous simulations by the CR code scfly, which was used in the analysis of the experiments to accurately reproduce the observed Kα emissions and pulse transmissions. Finally, it is shown that the effects of the nonthermal electrons are negligible for the conditions of the particular experiments under investigation.

  1. GRMHD and GRPIC Simulations

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Mizuno, Y.; Watson, M.; Fuerst, S.; Wu, K.; Hardee, P.; Fishman, G. J.

    2007-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity by using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous code. The simulation results show jet formation from a geometrically thin accretion disk near a nonrotating and a rotating black hole. We will discuss how the jet properties depend on the rotation of the black hole and on the magnetic field configuration, including issues for future research. A General Relativistic Particle-in-Cell code (GRPIC) has been developed using the Kerr-Schild metric. The code includes kinetic effects and is consistent with the GRMHD code. Since the gravitational force acting on particles is extreme near black holes, there are some difficulties in numerically describing these processes. The preliminary code consists of an accretion disk and a free-falling corona. Results indicate that particles are ejected from the black hole. These results are consistent with other GRMHD simulations. The GRPIC simulation results will be presented, along with some remarks and future improvements. The emission is calculated from relativistic flows in black hole systems using a fully general relativistic radiative transfer formulation, with flow structures obtained by GRMHD simulations considering thermal free-free emission and thermal synchrotron emission. Bright filament-like features protrude (visually) from the accretion disk surface, which are enhancements of synchrotron emission where the magnetic field roughly aligns with the line-of-sight in the co-moving frame. The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We would like to extend this research using GRPIC simulations and examine a possible new mechanism for certain X-ray quasi-periodic oscillations (QPOs) observed in black hole X-ray binaries.

  2. Collisional dependence of Alfvén mode saturation in tokamaks

    DOE PAGES

    Zhou, Muni; White, Roscoe

    2016-10-26

    Saturation of Alfvén modes driven unstable by a distribution of high energy particles as a function of collisionality is investigated with a guiding center code, using numerical eigenfunctions produced by linear theory and numerical high energy particle distributions. The most important resonance is found and it is shown that when the resonance domain is bounded, not allowing particles to collisionlessly escape, the saturation amplitude is given by the balance of the resonance mixing time with the time for nearby particles to collisionally diffuse across the resonance width. Finally, saturation amplitudes are in agreement with theoretical predictions as long as the mode amplitude is not so large that it produces stochastic loss from the resonance domain.

  3. Collisional dependence of Alfvén mode saturation in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Muni; White, Roscoe

    Saturation of Alfvén modes driven unstable by a distribution of high energy particles as a function of collisionality is investigated with a guiding center code, using numerical eigenfunctions produced by linear theory and numerical high energy particle distributions. The most important resonance is found and it is shown that when the resonance domain is bounded, not allowing particles to collisionlessly escape, the saturation amplitude is given by the balance of the resonance mixing time with the time for nearby particles to collisionally diffuse across the resonance width. Finally, saturation amplitudes are in agreement with theoretical predictions as long as the mode amplitude is not so large that it produces stochastic loss from the resonance domain.

  4. Electron-beam-ion-source (EBIS) modeling progress at FAR-TECH, Inc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J. S., E-mail: kim@far-tech.com; Zhao, L., E-mail: kim@far-tech.com; Spencer, J. A., E-mail: kim@far-tech.com

    FAR-TECH, Inc. has been developing a numerical modeling tool for Electron-Beam-Ion-Sources (EBISs). The tool consists of two codes. One is the Particle-Beam-Gun-Simulation (PBGUNS) code to simulate a steady-state electron beam, and the other is the EBIS-Particle-In-Cell (EBIS-PIC) code to simulate ion charge breeding with the electron beam. PBGUNS, a 2D (r,z) electron gun and ion source simulation code, has been extended for efficient modeling of EBISs, and the work was presented previously. EBIS-PIC is a space-charge self-consistent PIC code and is written to simulate charge breeding in an axisymmetric 2D (r,z) device allowing for full three-dimensional ion dynamics. This 2D code has been successfully benchmarked against Test-EBIS measurements at Brookhaven National Laboratory. For long-timescale (up to tens of ms) ion charge breeding, the 2D EBIS-PIC simulations take a long computational time, making the simulation less practical. Most of the EBIS charge breeding, however, may be modeled in 1D (r), as the axial dependence of the ion dynamics may be ignored in the trap. Where 1D approximations are valid, simulations of charge breeding in an EBIS over long time scales become possible, using EBIS-PIC together with PBGUNS. Initial 1D results are presented. The significance of the magnetic field to ion dynamics, ion cooling effects due to collisions with neutral gas, and the role of Coulomb collisions are presented.

  5. Microdosimetric evaluation of the neutron field for BNCT at Kyoto University reactor by using the PHITS code.

    PubMed

    Baba, H; Onizuka, Y; Nakao, M; Fukahori, M; Sato, T; Sakurai, Y; Tanaka, H; Endo, S

    2011-02-01

    In this study, microdosimetric energy distributions of secondary charged particles from the ¹⁰B(n,α)⁷Li reaction in a boron neutron capture therapy (BNCT) field were calculated using the Particle and Heavy Ion Transport code System (PHITS). The PHITS simulation was performed to reproduce the geometrical set-up of an experiment that measured the microdosimetric energy distributions at the Kyoto University Reactor, where two types of tissue-equivalent proportional counters were used, one with an A-150 wall alone and another with a 50-ppm-boron-loaded A-150 wall. It was found that the PHITS code is a useful tool for the simulation of the energy deposited in tissue in BNCT, based on the comparisons with experimental results.

  6. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.

  7. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    NASA Astrophysics Data System (ADS)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; JET contributors

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source of the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected to be a main tool as the fusion product generator in the complete analysis calculation chain: ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures as well as cooling and balance-of-plant in DEMO applications and other reactor relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics and additionally, AFSI will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic diagnostics for quantitative benchmarking.

  8. Rate-compatible punctured convolutional codes (RCPC codes) and their applications

    NASA Astrophysics Data System (ADS)

    Hagenauer, Joachim

    1988-04-01

    The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
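    The puncturing operation and the rate-compatibility restriction can be illustrated with a short sketch. The example below punctures the output of a rate-1/2 mother code with period P = 4; the specific puncturing tables are illustrative only and are not the optimized RCPC tables of the paper.

```python
import numpy as np

def puncture(coded_bits, table):
    """Apply a puncturing table to the output of a rate-1/N mother code.
    coded_bits has shape (N, L); table has shape (N, P) with 1 = transmit.
    The punctured code rate is P / (number of ones in the table)."""
    N, L = coded_bits.shape
    P = table.shape[1]
    keep = np.tile(table, (1, L // P)).astype(bool)
    return coded_bits[keep]

# illustrative tables for a rate-1/2 mother code with period P = 4.
# Rate compatibility: every bit kept by the high-rate table is also
# kept by the lower-rate table.
table_hi = np.array([[1, 1, 1, 1],
                     [1, 0, 0, 0]])      # rate 4/5 (l = 1)
table_lo = np.array([[1, 1, 1, 1],
                     [1, 0, 1, 0]])      # rate 4/6 = 2/3 (l = 2)
bits = np.random.randint(0, 2, size=(2, 8))
print(puncture(bits, table_hi), puncture(bits, table_lo))
```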

  9. Dust Cloud Modeling and Propagation Effects for Radar and Communications Codes

    DTIC Science & Technology

    1978-11-01

    Particles can be described by a power-law probability distribution with a power exponent of 4. Four is a typical value for dust particles from loose, unconsolidated soils such as desert alluvium; dust generated from a nuclear cratering explosion in rock and cohesive soils has power exponents ... p = power-law exponent, a_min = minimum particle diameter in the distribution (cm), a_max = maximum particle diameter in the distribution (cm). The log ...
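    The power-law size distribution referred to above can be sampled in a few lines. The sketch below draws diameters from n(a) proportional to a^(-p) with p = 4 by inverse-transform sampling; the diameter limits are placeholders, since the report's actual values are not reproduced here.

```python
import numpy as np

def sample_powerlaw_diameters(n, p=4.0, a_min=1e-4, a_max=1e-1, rng=None):
    """Draw n particle diameters (cm) from a number distribution
    n(a) ~ a**(-p) truncated to [a_min, a_max], using inverse-transform
    sampling.  p = 4 matches the exponent quoted in the report; the
    diameter limits here are illustrative."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    k = 1.0 - p                              # exponent of the integrated CDF (p != 1)
    return (a_min**k + u * (a_max**k - a_min**k)) ** (1.0 / k)

d = sample_powerlaw_diameters(100000)
print(d.min(), d.max(), np.median(d))
```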

  10. A Particle Module for the PLUTO Code. I. An Implementation of the MHD–PIC Equations

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Bodo, G.; Vaidya, B.; Mattia, G.

    2018-05-01

    We describe an implementation of a particle physics module available for the PLUTO code appropriate for the dynamical evolution of a plasma consisting of a thermal fluid and a nonthermal component represented by relativistic charged particles or cosmic rays (CRs). While the fluid is approached using standard numerical schemes for magnetohydrodynamics, CR particles are treated kinetically using conventional Particle-In-Cell (PIC) techniques. The module can be used either to describe test-particle motion in the fluid electromagnetic field or to solve the fully coupled magnetohydrodynamics (MHD)–PIC system of equations with particle backreaction on the fluid as originally introduced by Bai et al. Particle backreaction on the fluid is included in the form of momentum–energy feedback and by introducing the CR-induced Hall term in Ohm’s law. The hybrid MHD–PIC module can be employed to study CR kinetic effects on scales larger than the (ion) skin depth provided that the Larmor gyration scale is properly resolved. When applicable, this formulation avoids resolving microscopic scales, offering substantial computational savings with respect to PIC simulations. We present a fully conservative formulation that is second-order accurate in time and space, and extends to either the Runge–Kutta (RK) or the corner transport upwind time-stepping schemes (for the fluid), while a standard Boris integrator is employed for the particles. For highly energetic relativistic CRs and in order to overcome the time-step restriction, a novel subcycling strategy that retains second-order accuracy in time is presented. Numerical benchmarks and applications including Bell instability, diffusive shock acceleration, and test-particle acceleration in reconnecting layers are discussed.
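    The particle update mentioned above relies on the standard Boris integrator. A minimal, non-relativistic Python sketch of a single Boris step is given below for illustration; the PLUTO module itself advances relativistic CR particles and adds the subcycling strategy described in the abstract, neither of which is reproduced here.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One non-relativistic Boris step: half electric kick, magnetic
    rotation, half electric kick, then position drift."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new

# uniform B along z: the particle gyrates in the x-y plane
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                      q_over_m=1.0, dt=0.05)
print(x, np.linalg.norm(v))   # |v| is conserved by the rotation step
```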

  11. Clinical code set engineering for reusing EHR data for research: A review.

    PubMed

    Williams, Richard; Kontopantelis, Evangelos; Buchan, Iain; Peek, Niels

    2017-06-01

    The construction of reliable, reusable clinical code sets is essential when re-using Electronic Health Record (EHR) data for research. Yet code set definitions are rarely transparent and their sharing is almost non-existent. There is a lack of methodological standards for the management (construction, sharing, revision and reuse) of clinical code sets which needs to be addressed to ensure the reliability and credibility of studies which use code sets. To review methodological literature on the management of sets of clinical codes used in research on clinical databases and to provide a list of best practice recommendations for future studies and software tools. We performed an exhaustive search for methodological papers about clinical code set engineering for re-using EHR data in research. This was supplemented with papers identified by snowball sampling. In addition, a list of e-phenotyping systems was constructed by merging references from several systematic reviews on this topic, and the processes adopted by those systems for code set management was reviewed. Thirty methodological papers were reviewed. Common approaches included: creating an initial list of synonyms for the condition of interest (n=20); making use of the hierarchical nature of coding terminologies during searching (n=23); reviewing sets with clinician input (n=20); and reusing and updating an existing code set (n=20). Several open source software tools (n=3) were discovered. There is a need for software tools that enable users to easily and quickly create, revise, extend, review and share code sets and we provide a list of recommendations for their design and implementation. Research re-using EHR data could be improved through the further development, more widespread use and routine reporting of the methods by which clinical codes were selected. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
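    One recurring ingredient in the reviewed approaches, making use of the hierarchical nature of coding terminologies during searching, can be sketched in a few lines. The example below expands a seed code set by walking a toy parent-child hierarchy; the codes and the data structure are hypothetical stand-ins for a real terminology such as Read codes or SNOMED CT.

```python
def expand_code_set(seed_codes, children):
    """Expand an initial clinical code set by walking a hierarchical
    terminology: every descendant of a seed code joins the set.
    'children' maps a code to its direct child codes (toy data)."""
    selected, stack = set(), list(seed_codes)
    while stack:
        code = stack.pop()
        if code not in selected:
            selected.add(code)
            stack.extend(children.get(code, []))
    return selected

# hypothetical hierarchy: a parent concept with two levels of children
children = {"C10": ["C10E", "C10F"], "C10E": ["C10E0"], "C10F": []}
print(sorted(expand_code_set({"C10"}, children)))
```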

  12. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega_b = 1, and sigma_8 = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L³, where L = 64 h⁻¹ Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smoothed particle hydrodynamics (SPH) Lagrangian approach. The Eulerian codes were run at N³ = 32³, 64³, 128³, and 256³ cells, the SPH codes at N³ = 32³ and 64³ particles. Results were then rebinned to a 16³ grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as ⟨T⟩ and ⟨ρ²⟩^(1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by ρ²) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high-Mach-number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.
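    The rebinning step used for the comparison is simple block averaging. The sketch below reduces a higher-resolution gridded field to a 16³ grid; the input field is random stand-in data rather than output from any of the five codes.

```python
import numpy as np

def rebin_to(field, target=16):
    """Rebin an N^3 gridded quantity to a coarser target^3 grid by block
    averaging, as done when comparing codes run at different resolutions.
    Assumes N is an integer multiple of target."""
    n = field.shape[0]
    f = n // target
    return field.reshape(target, f, target, f, target, f).mean(axis=(1, 3, 5))

rho = np.random.rand(64, 64, 64)               # stand-in for a 64^3 density field
rho16 = rebin_to(rho, 16)
print(rho16.shape, rho.mean(), rho16.mean())   # global mean is preserved
```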

  13. Simulating the dynamics of complex plasmas.

    PubMed

    Schwabe, M; Graves, D B

    2013-08-01

    Complex plasmas are low-temperature plasmas that contain micrometer-size particles in addition to the neutral gas particles and the ions and electrons that make up the plasma. The microparticles interact strongly and display a wealth of collective effects. Here we report on linked numerical simulations that reproduce many of the experimental results of complex plasmas. We model a capacitively coupled plasma with a fluid code written for the commercial package comsol. The output of this model is used to calculate forces on microparticles. The microparticles are modeled using the molecular dynamics package lammps, which we extended to include the forces from the plasma. Using this method, we are able to reproduce void formation, the separation of particles of different sizes into layers, lane formation, vortex formation, and other effects.

  14. NASA Tech Briefs, October 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Light-Driven Polymeric Bimorph Actuators; Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm; Cloud Water Content Sensor for Sounding Balloons and Small UAVs; Pixelized Device Control Actuators for Large Adaptive Optics; T-Slide Linear Actuators; G4FET Implementations of Some Logic Circuits; Electrically Variable or Programmable Nonvolatile Capacitors; System for Automated Calibration of Vector Modulators; Complementary Paired G4FETs as Voltage-Controlled NDR Device; Three MMIC Amplifiers for the 120-to-200 GHz Frequency Band; Low-Noise MMIC Amplifiers for 120 to 180 GHz; Using Ozone To Clean and Passivate Oxygen-Handling Hardware; Metal Standards for Waveguide Characterization of Materials; Two-Piece Screens for Decontaminating Granular Material; Mercuric Iodide Anticoincidence Shield for Gamma-Ray Spectrometer; Improved Method of Design for Folding Inflatable Shells; Ultra-Large Solar Sail; Cooperative Three-Robot System for Traversing Steep Slopes; Assemblies of Conformal Tanks; Microfluidic Pumps Containing Teflon[Trademark] AF Diaphragms; Transparent Conveyor of Dielectric Liquids or Particles; Multi-Cone Model for Estimating GPS Ionospheric Delays; High-Sensitivity GaN Microchemical Sensors; On the Divergence of the Velocity Vector in Real-Gas Flow; Progress Toward a Compact, Highly Stable Ion Clock; Instruments for Imaging from Far to Near; Reflectors Made from Membranes Stretched Between Beams; Integrated Risk and Knowledge Management Program -- IRKM-P; LDPC Codes with Minimum Distance Proportional to Block Size; Constructing LDPC Codes from Loop-Free Encoding Modules; MMICs with Radial Probe Transitions to Waveguides; Tests of Low-Noise MMIC Amplifier Module at 290 to 340 GHz; and Extending Newtonian Dynamics to Include Stochastic Processes.

  15. On the kinetics of transgranular particle embrittlement during simulated carburizing in steel containing grain-refining additions of aluminum and niobium plus aluminum

    DOE PAGES

    Leap, Michael Jerald

    2017-08-31

    Here, the kinetics of toughness degradation resulting from transgranular particle embrittlement are evaluated as a function of composition and processing history for simulated carburizing operations in air-melt steel containing grain-refining additions of aluminum and aluminum plus niobium. The kinetics of particle embrittlement are inherently linked to the ripening of AlN precipitates after extended austenitization in steel containing carbon contents representative of both the case and core of a carburized component. Embrittlement in steel containing AlN occurs with an activation energy similar to the value for aluminum diffusion in austenite, although an AlN volume fraction effect on the embrittlement kinetics is manifested as decreases in activation energy with decreases in the [Al]/[N] ratio of steel. In contrast, the presence of niobium substantially retards the kinetics of particle embrittlement in steel containing 120–200 ppm N. Observations of AlN precipitates coated with Nb(C,N) indicate that the decreases in embrittlement kinetics are related to a reduction in the potential for AlN ripening during austenitization.

  16. On the kinetics of transgranular particle embrittlement during simulated carburizing in steel containing grain-refining additions of aluminum and niobium plus aluminum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leap, Michael Jerald

    Here, the kinetics of toughness degradation resulting from transgranular particle embrittlement are evaluated as a function of composition and processing history for simulated carburizing operations in air-melt steel containing grain-refining additions of aluminum and aluminum plus niobium. The kinetics of particle embrittlement are inherently linked to the ripening of AlN precipitates after extended austenitization in steel containing carbon contents representative of both the case and core of a carburized component. Embrittlement in steel containing AlN occurs with an activation energy similar to the value for aluminum diffusion in austenite, although an AlN volume fraction effect on the embrittlement kinetics is manifested as decreases in activation energy with decreases in the [Al]/[N] ratio of steel. In contrast, the presence of niobium substantially retards the kinetics of particle embrittlement in steel containing 120–200 ppm N. Observations of AlN precipitates coated with Nb(C,N) indicate that the decreases in embrittlement kinetics are related to a reduction in the potential for AlN ripening during austenitization.

  17. Collisional disruptions of rotating targets

    NASA Astrophysics Data System (ADS)

    Ševeček, Pavel; Broz, Miroslav

    2017-10-01

    Collisions are key processes in the evolution of the Main Asteroid Belt, and impact events - i.e. target fragmentation and gravitational reaccumulation - are commonly studied by numerical simulations, namely by SPH and N-body methods. In our work, we extend the previous studies by assuming rotating targets and we study the dependence of the resulting size distributions on the pre-impact rotation of the target. To obtain stable initial conditions, it is also necessary to include the self-gravity already in the fragmentation phase, which was previously neglected. To tackle this problem, we developed an SPH code, accelerated by SSE/AVX instruction sets and parallelized. The code solves the standard set of hydrodynamic equations, using the Tillotson equation of state, the von Mises criterion for plastic yielding, and the scalar Grady-Kipp model for fragmentation. We further modified the velocity gradient by a correction tensor (Schäfer et al. 2007) to ensure a first-order conservation of the total angular momentum. As the intact target is a spherical body, its gravity can be approximated by the potential of a homogeneous sphere, making it easy to set up initial conditions. This is however infeasible for later stages of the disruption; to this end, we included the Barnes-Hut algorithm to compute the gravitational accelerations, using a multipole expansion of distant particles up to hexadecapole order. We tested the code carefully, comparing the results to our previous computations obtained with the SPH5 code (Benz and Asphaug 1994). Finally, we ran a set of simulations and we discuss the differences between the synthetic families created by rotating and static targets.
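    The homogeneous-sphere approximation used for the intact target can be written down directly: the acceleration grows linearly with radius inside the body and falls off as 1/r² outside. The sketch below is a generic illustration with made-up mass and radius, not values or code from the paper.

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def homogeneous_sphere_acc(pos, M, R):
    """Gravitational acceleration of a homogeneous sphere of mass M and
    radius R: linear in r inside the body, Keplerian 1/r^2 outside."""
    r = np.linalg.norm(pos, axis=-1, keepdims=True)
    a_in = -G * M * pos / R**3          # interior: a = -G M r_vec / R^3
    a_out = -G * M * pos / r**3         # exterior: a = -G M r_vec / r^3
    return np.where(r < R, a_in, a_out)

# one point inside and one outside a 10-km target (illustrative numbers)
pts = np.array([[2.0e3, 0.0, 0.0], [2.0e4, 0.0, 0.0]])
print(homogeneous_sphere_acc(pts, M=1.4e15, R=1.0e4))
```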

  18. Rates for neutron-capture reactions on tungsten isotopes in iron meteorites. [Abstract only]

    NASA Technical Reports Server (NTRS)

    Masarik, J.; Reedy, R. C.

    1994-01-01

    High-precision W isotopic analyses by Harper and Jacobsen indicate the W-182/W-183 ratio in the Toluca iron meteorite is shifted by -(3.0 +/- 0.9) × 10⁻⁴ relative to a terrestrial standard. Possible causes of this shift are neutron-capture reactions on W during Toluca's approximately 600-Ma exposure to cosmic-ray particles, or radiogenic growth of W-182 from 9-Ma Hf-182 in the silicate portion of the Earth after removal of W to the Earth's core. Calculations of the rates of neutron-capture reactions on W isotopes were done to study the first possibility. The LAHET Code System (LCS), which consists of the Los Alamos High Energy Transport (LAHET) code and the Monte Carlo N-Particle (MCNP) transport code, was used to numerically simulate the irradiation of the Toluca iron meteorite by galactic-cosmic-ray (GCR) particles and to calculate the rates of W(n, gamma) reactions. Toluca was modeled as a 3.9-m-radius sphere with the composition of a typical IA iron meteorite. The incident GCR protons and their interactions were modeled with LAHET, which also handled the interactions of neutrons with energies above 20 MeV. The rates for the capture of neutrons by W-182, W-183, and W-186 were calculated using the detailed library of (n, gamma) cross sections in MCNP. For this study of the possible effect of W(n, gamma) reactions on W isotope systematics, we consider the peak rates. The calculated maximum change in the normalized W-182/W-183 ratio due to neutron-capture reactions cannot account for more than 25% of the mass-182 deficit observed in Toluca W.

  19. Formation of cage-like particles by poly(amino acid)-based block copolymers in aqueous solution.

    PubMed Central

    Cudd, A; Bhogal, M; O'Mullane, J; Goddard, P

    1991-01-01

    When dissolved in N,N-dimethylformamide and then dialyzed against phosphate-buffered saline, A-B-A block copolymers composed of poly[N5-(2-hydroxyethyl)-L-glutamine]-block-poly(gamma-benzyl-L-glutamate)-block-poly[N5-(2-hydroxyethyl)-L-glutamine] form particles. The particles are cage-like structures with average diameters of 300 nm (average polydispersity, 0.3-0.5). They are stable in aqueous solution at 4 degrees C for up to 3 weeks, at which time flocculation becomes apparent. Negative staining and freeze-fracture electron microscopy suggest that cage-like particles are formed by selective association of segregated micelle populations. A model of particle formation is presented in which B blocks form micelles in dimethylformamide. On dialysis against an aqueous solution, the extended A blocks then associate intermolecularly to form rod-shaped micelles, which connect the B block micelles. The result is a meshed cage-like particle. The implications of these observations on the aggregation behavior of polymeric surfactants in dilute solution are discussed. PMID:11607245

  20. SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhihong

    2013-12-18

    During the first year of the SciDAC gyrokinetic particle simulation (GPS) project, the GPS team (Zhihong Lin, Liu Chen, Yasutaro Nishimura, and Igor Holod) at the University of California, Irvine (UCI) studied the tokamak electron transport driven by electron temperature gradient (ETG) turbulence, and by trapped electron mode (TEM) turbulence and ion temperature gradient (ITG) turbulence with kinetic electron effects, and extended our studies of ITG turbulence spreading to core-edge coupling. We have developed and optimized an elliptic solver using the finite element method (FEM), which enables the implementation of advanced kinetic electron models (split-weight scheme and hybrid model) in the SciDAC GPS production code GTC. The GTC code has been ported and optimized on both scalar and vector parallel computer architectures, and is being transformed into object-oriented style to facilitate collaborative code development. During this period, the UCI team members presented 11 invited talks at major national and international conferences, published 22 papers in peer-reviewed journals and 10 papers in conference proceedings. UCI hosted the annual SciDAC Workshop on Plasma Turbulence sponsored by the GPS Center, 2005-2007. The workshop was attended by about fifty US and foreign researchers and financially sponsored several graduate students from MIT, Princeton University, Germany, Switzerland, and Finland. A new SciDAC postdoc, Igor Holod, has arrived at UCI to initiate global particle simulation of magnetohydrodynamics turbulence driven by energetic particle modes. The PI, Z. Lin, has been promoted to Associate Professor with tenure at UCI.

  1. PFMCal: Photonic force microscopy calibration extended for its application in high-frequency microrheology

    NASA Astrophysics Data System (ADS)

    Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.

    2017-11-01

    The present document is an update of the previously published MatLab code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. In this instance, an alternative version of the original code, based on the same physical theory [2], but focused on the automation of the calibration of measurements using spherical probes, is outlined. The newly added code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid may not be. This extended calibration methodology is automatic, without the need for a user interface. A code for calibration by means of thermal noise analysis [3] is also included; this is a method that can be applied when using viscoelastic fluids if the trap stiffness is previously estimated [4]. The new code can be executed in MatLab and in GNU Octave. Program Files doi: http://dx.doi.org/10.17632/s59f3gz729.1 Licensing provisions: GPLv3. Programming language: MatLab 2016a (MathWorks Inc.) and GNU Octave 4.0. Operating system: Linux and Windows. Supplementary material: A new document README.pdf includes basic running instructions for the new code. Journal reference of previous version: Computer Physics Communications, 196 (2015) 599. Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors. Nature of problem (approx. 50-250 words): The original code uses a MatLab-provided user interface, which is not available in GNU Octave, and cannot be used outside proprietary software such as MatLab. In addition, the calibration of measurements with spherical probes needs an automatic method when calibrating large amounts of data for microrheology. Solution method (approx. 50-250 words): The new code can be executed in the latest version of MatLab and in GNU Octave, a free and open-source alternative to MatLab. This code implements an automatic calibration process which requires only writing the input data in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness is previously estimated. Reasons for the new version: This version extends the functionality of PFMCal for the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works in different operating systems, and is compatible with GNU Octave. Summary of revisions: The original MatLab program in the previous version, which is executed by PFMCal.m, is not changed. Here, we have added two additional main files named PFMCal_auto.m and PFMCal_histo.m, which implement automatic calculation of the calibration process and calibration through Boltzmann statistics, respectively. The process of calibration using this code for spherical beads is described in the README.pdf file provided in the new code submission. Here, we obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. Using that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the value of the particle's radius, a, is previously known.
For comparison, we include in the extended code the method of calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. Besides, with the prior estimation of the trap stiffness, along with the known value of the particle's radius, we can use thermal noise statistics to obtain calibration factors, β, according to the quadratic form of the optical potential, βE, and related to the Gaussian distribution of the bead's positions, βσ2. This method has been demonstrated to be applicable to the calibration of optical tweezers when using non-Newtonian viscoelastic polymeric liquids [4]. An example of the results using this calibration process is summarized in Table 1. Using the data provided in the new code submission, for water and acetone fluids, we calculate all the calibration factors by using the original PFMCal.m and by the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. Regarding the new code, PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows how we obtain the expected viscosity of the two fluids at this temperature and how the different methods provide good agreement between trap stiffnesses and calibration factors. Additional comments including Restrictions and Unusual features (approx. 50-250 words): The original code, PFMCal.m, runs under MatLab using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification using MatLab or GNU Octave. The code has been tested in Linux and Windows operating systems.
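    The thermal-noise (Boltzmann statistics) calibration described above follows from equipartition: with a previously estimated trap stiffness k and the variance of the raw detector signal, the position calibration factor is beta = sqrt(k_B T / (k * var(V))). The Python sketch below only illustrates this relation on synthetic data; PFMCal itself is MatLab/Octave code, and the function and variable names here are placeholders.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def beta_from_equipartition(signal_volts, stiffness, temperature=295.0):
    """Position calibration factor beta (m/V) from thermal-noise statistics:
    equipartition gives k <x^2> = kB T, and with x = beta * V this yields
    beta = sqrt(kB T / (k * var(V))).  'stiffness' (N/m) must come from a
    prior calibration, as in the thermal-noise approach described above."""
    var_v = np.var(signal_volts)
    return np.sqrt(KB * temperature / (stiffness * var_v))

# synthetic detector trace standing in for real data (true beta = 1e-7 m/V)
rng = np.random.default_rng(1)
k_trap = 5e-6                                               # trap stiffness [N/m]
x = rng.normal(0.0, np.sqrt(KB * 295.0 / k_trap), 200000)   # positions [m]
print(beta_from_equipartition(x / 1e-7, k_trap))            # recovers ~1e-7 m/V
```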

  2. Preliminary investigation of parasitic radioisotope production using the LANL IPF secondary neutron flux

    NASA Astrophysics Data System (ADS)

    Engle, J. W.; Kelsey, C. T.; Bach, H.; Ballard, B. D.; Fassbender, M. E.; John, K. D.; Birnbaum, E. R.; Nortier, F. M.

    2012-12-01

    In order to ascertain the potential for radioisotope production and material science studies using the Isotope Production Facility at Los Alamos National Laboratory, a two-pronged investigation has been initiated. The Monte Carlo N-Particle eXtended (MCNPX) code has been used in conjunction with the CINDER 90 burnup code to predict neutron flux energy distributions resulting from routine irradiations and to estimate yields of radioisotopes of interest for hypothetical irradiation conditions. A threshold foil activation experiment is planned to study the neutron flux using measured yields of radioisotopes, quantified by HPGe gamma spectroscopy, from representative nuclear reactions with known thresholds up to 50 MeV.

  3. CoMD Implementation Suite in Emerging Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haque, Riyaz; Reeve, Sam; Juallmes, Luc

    CoMD-Em is a software implementation suite of the CoMD [4] proxy app using different emerging programming models. It is intended to analyze the features and capabilities of novel programming models that could help ensure code and performance portability and scalability across heterogeneous platforms while improving programmer productivity. Another goal is to provide the authors and vendors with some meaningful feedback regarding the capabilities and limitations of their models. The actual application is a classical molecular dynamics (MD) simulation using either the Lennard-Jones (LJ) method or the embedded atom method (EAM) for the primary particle interaction. The code can be extended to support alternate interaction models. The code is expected to run on a wide class of heterogeneous hardware configurations, such as shared/distributed/hybrid memory, GPUs, and any other platform supported by the underlying programming model.
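    For illustration, the Lennard-Jones interaction mentioned above can be written as a brute-force pairwise force loop. The sketch below is a generic O(N²) toy version in reduced units, not code from the CoMD-Em suite, which uses cell lists and the programming models under study.

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0, rcut=2.5):
    """Pairwise Lennard-Jones forces, F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * r_ij,
    accumulated with Newton's third law.  Brute-force O(N^2) toy version."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            if r2 > rcut**2:
                continue                      # apply the cutoff radius
            sr6 = (sigma**2 / r2) ** 3
            fmag = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r2
            forces[i] += fmag * rij
            forces[j] -= fmag * rij
    return forces

pos = np.random.rand(32, 3) * 4.0             # a few atoms in a small box
print(lj_forces(pos)[0])
```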

  4. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  5. Simulations of toroidal Alfvén eigenmode excited by fast ions on the Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Pei, Youbin; Xiang, Nong; Shen, Wei; Hu, Youjun; Todo, Y.; Zhou, Deng; Huang, Juan

    2018-05-01

    Kinetic-MagnetoHydroDynamic (MHD) hybrid simulations are carried out to study fast ion driven toroidal Alfvén eigenmodes (TAEs) on the Experimental Advanced Superconducting Tokamak (EAST). The first part of this article presents the linear benchmark between two kinetic-MHD codes, namely MEGA and M3D-K, based on a realistic EAST equilibrium. Parameter scans show that the frequency and the growth rate of the TAE given by the two codes agree with each other. The second part of this article discusses the resonance interaction between the TAE and fast ions simulated by the MEGA code. The results show that the TAE exchanges energy with the co-current passing particles with the parallel velocity |v∥| ≈ V_A0/3 or |v∥| ≈ V_A0/5, where V_A0 is the Alfvén speed on the magnetic axis. The TAE destabilized by the counter-current passing ions is also analyzed and found to have a much smaller growth rate than the co-current ions driven TAE. One of the reasons for this is found to be that the overlapping region of the TAE spatial location and the counter-current ion orbits is narrow, and thus the wave-particle energy exchange is not efficient.
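
    The quoted resonance velocities follow from the on-axis Alfvén speed; the sketch below (illustrative Python with a placeholder field and density, not the EAST equilibrium used in the paper) evaluates V_A0 = B0/sqrt(mu0*n_i*m_i) and the commonly quoted passing-particle sideband resonances V_A0/(2l+1).

        import numpy as np

        MU0 = 4e-7 * np.pi       # vacuum permeability [H/m]
        M_P = 1.6726e-27         # proton mass [kg]

        def alfven_speed(B0, n_i, A=2.0):
            """On-axis Alfven speed [m/s] for ion density n_i [m^-3] and mass number A."""
            return B0 / np.sqrt(MU0 * n_i * A * M_P)

        # Placeholder deuterium plasma parameters
        VA0 = alfven_speed(B0=2.0, n_i=4e19)
        print(VA0, [VA0 / (2 * l + 1) for l in range(3)])   # VA0, VA0/3, VA0/5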

  6. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group S_N particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group S_N particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  7. Continuous Energy Photon Transport Implementation in MCATK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Terry R.; Trahan, Travis John; Sweezy, Jeremy Ed

    2016-10-31

    The Monte Carlo Application ToolKit (MCATK) code development team has implemented Monte Carlo photon transport into the MCATK software suite. The current particle transport capabilities in MCATK, which process the tracking and collision physics, have been extended to enable tracking of photons using the same continuous energy approximation. We describe the four photoatomic processes implemented, which are coherent scattering, incoherent scattering, pair-production, and photoelectric absorption. The accompanying background, implementation, and verification of these processes will be presented.
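
    Selecting which of the four photoatomic processes occurs at a collision reduces to discrete sampling in proportion to the cross sections; a minimal Python sketch (placeholder cross sections, not MCATK data structures) is:

        import numpy as np

        def sample_photon_interaction(sigmas, rng):
            """Sample the free flight and pick one photoatomic process
            in proportion to its macroscopic cross section [1/cm]."""
            names = list(sigmas)
            xs = np.array([sigmas[n] for n in names], dtype=float)
            total = xs.sum()
            distance = -np.log(rng.random()) / total      # distance to the next collision
            process = rng.choice(names, p=xs / total)     # which process occurs there
            return distance, process

        rng = np.random.default_rng(7)
        # Placeholder values for a ~1 MeV photon in a light material
        xs = {"coherent": 0.002, "incoherent": 0.060, "pair": 0.0005, "photoelectric": 0.001}
        print(sample_photon_interaction(xs, rng))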

  8. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations?

    NASA Astrophysics Data System (ADS)

    Baushev, A. N.; del Valle, L.; Campusano, L. E.; Escala, A.; Muñoz, R. R.; Palma, G. A.

    2017-05-01

    Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) of this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergency of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations along the whole halo, revealing an effective interaction between the test bodies. This result casts doubts on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by the cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of the N-body simulation convergency is necessary before a `core-cusp problem' can properly be used to question the validity of the CDM model.
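
    The convergence test described here hinges on monitoring quantities that should be conserved for individual particles in a smooth Hernquist field. A minimal Python sketch of that bookkeeping (the snapshot format and units are assumptions, not the Gadget-3 interface used by the authors) is:

        import numpy as np

        def hernquist_potential(r, GM=1.0, a=1.0):
            """Hernquist potential Phi(r) = -GM / (r + a)."""
            return -GM / (r + a)

        def particle_invariants(pos, vel, GM=1.0, a=1.0):
            """Per-particle specific energy and angular momentum in the smooth Hernquist field."""
            r = np.linalg.norm(pos, axis=1)
            E = 0.5 * np.sum(vel**2, axis=1) + hernquist_potential(r, GM, a)
            L = np.linalg.norm(np.cross(pos, vel), axis=1)
            return E, L

        def relative_drift(snap0, snap1, GM=1.0, a=1.0):
            """Fractional change of E and L between two (pos, vel) snapshots."""
            E0, L0 = particle_invariants(*snap0, GM, a)
            E1, L1 = particle_invariants(*snap1, GM, a)
            return np.abs(E1 - E0) / np.abs(E0), np.abs(L1 - L0) / np.maximum(L0, 1e-12)

        # Trivial smoke test with identical synthetic snapshots (drift is exactly zero)
        rng = np.random.default_rng(3)
        pos, vel = rng.normal(size=(1000, 3)), 0.3 * rng.normal(size=(1000, 3))
        dE, dL = relative_drift((pos, vel), (pos, vel))
        print(dE.max(), dL.max())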

  9. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baushev, A.N.; Valle, L. del; Campusano, L.E.

    2017-05-01

    Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) of this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergency of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations along the whole halo, revealing an effective interaction between the test bodies. This result casts doubts on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by the cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of the N-body simulation convergency is necessary before a 'core-cusp problem' can properly be used to question the validity of the CDM model.

  10. A Nanometer Aerosol Size Analyzer (nASA) for Rapid Measurement of High-concentration Size Distributions

    NASA Astrophysics Data System (ADS)

    Han, Hee-Siew; Chen, Da-Ren; Pui, David Y. H.; Anderson, Bruce E.

    2000-03-01

    We have developed a fast-response nanometer aerosol size analyzer (nASA) that is capable of scanning 30 size channels between 3 and 100 nm in a total time of 3 s. The analyzer includes a bipolar charger (Po210), an extended-length nanometer differential mobility analyzer (Nano-DMA), and an electrometer (TSI 3068). This combination of components provides particle size spectra at a scan rate of 0.1 s per channel free of uncertainties caused by response-time-induced smearing. The nASA thus offers a fast response for aerosol size distribution measurements in high-concentration conditions and also eliminates the need for applying a de-smearing algorithm to resulting data. In addition, because of its thermodynamically stable means of particle detection, the nASA is useful for applications requiring measurements over a broad range of sample pressures and temperatures. Indeed, experimental transfer functions determined for the extended-length Nano-DMA using the tandem differential mobility analyzer (TDMA) technique indicate the nASA provides good size resolution at pressures as low as 200 Torr. Also, as was demonstrated in tests to characterize the soot emissions from the J85-GE engine of a T-38 aircraft, the broad dynamic concentration range of the nASA makes it particularly suitable for studies of combustion or particle formation processes. Further details of the nASA performance as well as results from calibrations, laboratory tests and field applications are presented below.
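
    The size spectra come from the usual electrical-mobility relation for a charged particle; the Python sketch below (generic air properties and slip-correction coefficients, not the instrument's calibration) inverts Z = n*e*Cc(d)/(3*pi*mu*d) for the mobility diameter.

        import numpy as np

        E_CHARGE = 1.602e-19   # elementary charge [C]
        MU_AIR = 1.81e-5       # air viscosity near room temperature [Pa s]
        LAMBDA = 66e-9         # mean free path of air near 1 atm [m]

        def slip_correction(d):
            """Cunningham slip correction with commonly used coefficients."""
            kn = 2.0 * LAMBDA / d
            return 1.0 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))

        def mobility_to_diameter(Z, n_charges=1, tol=1e-12):
            """Invert Z = n*e*Cc(d) / (3*pi*mu*d) for the particle diameter d [m]."""
            d = 10e-9                                   # starting guess
            for _ in range(200):
                d_new = n_charges * E_CHARGE * slip_correction(d) / (3.0 * np.pi * MU_AIR * Z)
                if abs(d_new - d) < tol:
                    return d_new
                d = 0.5 * (d + d_new)                   # damped fixed-point update
            return d

        # Mobility diameter (in nm) of a singly charged particle at Z = 1e-7 m^2/(V s)
        print(mobility_to_diameter(1e-7) * 1e9)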

  11. A Nanometer Aerosol Size Analyzer (nASA) for Rapid Measurement of High-Concentration Size Distributions

    NASA Technical Reports Server (NTRS)

    Han, Hee-Siew; Chen, Da-Ren; Pui, David Y. H.; Anderson, Bruce E.

    2001-01-01

    We have developed a fast-response Nanometer Aerosol Size Analyzer (nASA) that is capable of scanning 30 size channels between 3 and 100 nm in a total time of 3 seconds. The analyzer includes a bipolar charger (P0210), an extended-length Nanometer Differential Mobility Analyzer (Nano-DMA), and an electrometer (TSI 3068). This combination of components provides particle size spectra at a scan rate of 0.1 second per channel free of uncertainties caused by response-time-induced smearing. The nASA thus offers a fast response for aerosol size distribution measurements in high-concentration conditions and also eliminates the need for applying a de-smearing algorithm to resulting data. In addition, because of its thermodynamically stable means of particle detection, the nASA is useful for applications requiring measurements over a broad range of sample pressures and temperatures. Indeed, experimental transfer functions determined for the extended-length Nano-DMA using the Tandem Differential Mobility Analyzer (TDMA) technique indicate the nASA provides good size resolution at pressures as low as 200 Torr. Also, as was demonstrated in tests to characterize the soot emissions from the J85-GE engine of a T38 aircraft, the broad dynamic concentration range of the nASA makes it particularly suitable for studies of combustion or particle formation processes. Further details of the nASA performance as well as results from calibrations, laboratory tests and field applications are presented.

  12. A fast low-to-high confinement mode bifurcation dynamics in the boundary-plasma gyrokinetic code XGC1

    NASA Astrophysics Data System (ADS)

    Ku, S.; Chang, C. S.; Hager, R.; Churchill, R. M.; Tynan, G. R.; Cziegler, I.; Greenwald, M.; Hughes, J.; Parker, S. E.; Adams, M. F.; D'Azevedo, E.; Worley, P.

    2018-05-01

    A fast edge turbulence suppression event has been simulated in the electrostatic version of the gyrokinetic particle-in-cell code XGC1 in a realistic diverted tokamak edge geometry under neutral particle recycling. The results show that the sequence of turbulent Reynolds stress followed by neoclassical ion orbit-loss driven together conspire to form the sustaining radial electric field shear and to quench turbulent transport just inside the last closed magnetic flux surface. The main suppression action is located in a thin radial layer around ψ_N ≃ 0.96-0.98, where ψ_N is the normalized poloidal flux, with the time scale ~0.1 ms.
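
    The Reynolds-stress part of the mechanism is, in essence, a flux-surface average of the product of the fluctuating radial and poloidal E×B velocities. A schematic Python calculation of that quantity from gridded velocity fluctuations (the array layout and synthetic data are assumptions; this is not the XGC1 diagnostic) is:

        import numpy as np

        def reynolds_stress(v_r, v_theta):
            """Surface-averaged Reynolds stress <v_r~ v_theta~> as a radial profile.

            v_r, v_theta : 2-D arrays (radius, poloidal angle) of E x B velocities;
            fluctuations are taken with respect to the poloidal (surface) average.
            """
            dv_r = v_r - v_r.mean(axis=1, keepdims=True)
            dv_t = v_theta - v_theta.mean(axis=1, keepdims=True)
            return (dv_r * dv_t).mean(axis=1)

        # Synthetic, partially correlated fluctuations as a smoke test
        rng = np.random.default_rng(5)
        a = rng.normal(size=(64, 256))
        stress = reynolds_stress(a, 0.5 * a + rng.normal(size=a.shape))
        print(stress.shape, stress.mean())   # ~0.5 * var(a) on average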

  13. MARS15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mokhov, Nikolai

    MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histogramming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles including neutrinos are built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with a possibility of producing composite shapes and assemblies and their 3D visualization along with a possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format). The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package that allows a very efficient and highly-accurate description, modeling and visualization of beam loss induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.

  14. Particle production of a graphite target system for the intensity frontier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X.; Kirk, H.; McDonald, K. T.

    2015-05-03

    A solid graphite target system is considered for an intense muon and/or neutrino source in support of physics at the intensity frontier. We previously optimized the geometric parameters of the beam and target to maximize particle production at low energies by incoming protons with kinetic energy of 6.75 GeV and an rms geometric emittance of 5 mm-mrad using the MARS15(2014) code. In this study, we ran MARS15 with ROOT-based geometry and also considered a mercury-jet target as an upgrade option. The optimization was extended to focused proton beams with transverse emittances from 5 to 50 mm-mrad, showing that the particle production decreases slowly with increasing emittance. We also studied beam-dump configurations to suppress the rate of undesirable high-energy secondary particles in the beam.

  15. Computational tools and lattice design for the PEP-II B-Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Irwin, J.; Nosochkov, Y.

    1997-02-01

    Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate performance of the lattices on the entire tune-plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT. © 1997 American Institute of Physics.
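
    Tracking with a one-turn map in its simplest (linear, single-plane) form is just repeated application of the Courant-Snyder matrix; the Python sketch below illustrates the idea (the nPB maps used for PEP-II are higher order, and the tune and optics values here are arbitrary).

        import numpy as np

        def one_turn_matrix(tune, beta=1.0, alpha=0.0):
            """Linear one-turn matrix for a single transverse plane (Courant-Snyder form)."""
            mu = 2.0 * np.pi * tune
            gamma = (1.0 + alpha**2) / beta
            return np.array([[np.cos(mu) + alpha * np.sin(mu), beta * np.sin(mu)],
                             [-gamma * np.sin(mu), np.cos(mu) - alpha * np.sin(mu)]])

        def track(x0, xp0, tune, n_turns=1024):
            """Apply the one-turn map repeatedly and return the turn-by-turn coordinates."""
            M = one_turn_matrix(tune)
            z = np.array([x0, xp0], dtype=float)
            out = np.empty((n_turns, 2))
            for turn in range(n_turns):
                z = M @ z
                out[turn] = z
            return out

        orbit = track(1e-3, 0.0, tune=0.31)   # arbitrary initial offset and tune
        print(orbit[:3])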

  16. Computational tools and lattice design for the PEP-II B-Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai Yunhai; Irwin, John; Nosochkov, Yuri

    1997-02-01

    Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate performance of the lattices on the entire tune-plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT.

  17. PoMiN: A Post-Minkowskian N-Body Solver

    NASA Astrophysics Data System (ADS)

    Feng, Justin; Baumann, Mark; Hall, Bryton; Doss, Joel; Spencer, Lucas; Matzner, Richard

    2018-05-01

    PoMiN is a lightweight N-body code based on the Post-Minkowskian N-body Hamiltonian of Ledvinka, Schafer, and Bicak, which includes General Relativistic effects up to first order in Newton's constant G, and all orders in the speed of light c. PoMiN is a single file written in C and uses a fourth-order Runge-Kutta integration scheme. PoMiN has also been written to handle an arbitrary number of particles (both massive and massless) with a computational complexity that scales as O(N^2).
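
    The structure of such a code is a fourth-order Runge-Kutta loop wrapped around an O(N^2) pairwise evaluation. The Python sketch below shows that skeleton for the Newtonian limit only (PoMiN evolves the Post-Minkowskian Hamiltonian instead; the softening length and the two-body test are illustrative additions).

        import numpy as np

        def accelerations(pos, masses, G=1.0, eps=1e-3):
            """Pairwise Newtonian accelerations, O(N^2), with a small softening eps."""
            n = len(masses)
            acc = np.zeros_like(pos)
            for i in range(n):
                d = pos - pos[i]                        # vectors from body i to every body
                r3 = (np.sum(d * d, axis=1) + eps**2) ** 1.5
                r3[i] = np.inf                          # exclude the self term
                acc[i] = G * np.sum((masses / r3)[:, None] * d, axis=0)
            return acc

        def rk4_step(pos, vel, masses, dt):
            """One classical fourth-order Runge-Kutta step for the N-body system."""
            k1v = accelerations(pos, masses);                k1x = vel
            k2v = accelerations(pos + 0.5*dt*k1x, masses);   k2x = vel + 0.5*dt*k1v
            k3v = accelerations(pos + 0.5*dt*k2x, masses);   k3x = vel + 0.5*dt*k2v
            k4v = accelerations(pos + dt*k3x, masses);       k4x = vel + dt*k3v
            pos = pos + dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
            vel = vel + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
            return pos, vel

        # Two-body circular orbit as a quick check (G = 1, total mass 1, separation 1)
        pos = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
        vel = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
        m = np.array([0.5, 0.5])
        for _ in range(1000):
            pos, vel = rk4_step(pos, vel, m, dt=0.01)
        print(np.linalg.norm(pos[0] - pos[1]))   # separation stays close to 1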

  18. Frequency-domain algorithm for the Lorenz-gauge gravitational self-force

    NASA Astrophysics Data System (ADS)

    Akcay, Sarp; Warburton, Niels; Barack, Leor

    2013-11-01

    State-of-the-art computations of the gravitational self-force (GSF) on massive particles in black hole spacetimes involve numerical evolution of the metric perturbation equations in the time domain, which is computationally very costly. We present here a new strategy based on a frequency-domain treatment of the perturbation equations, which offers considerable computational saving. The essential ingredients of our method are (i) a Fourier-harmonic decomposition of the Lorenz-gauge metric perturbation equations and a numerical solution of the resulting coupled set of ordinary equations with suitable boundary conditions; (ii) a generalized version of the method of extended homogeneous solutions [L. Barack, A. Ori, and N. Sago, Phys. Rev. D 78, 084021 (2008)] used to circumvent the Gibbs phenomenon that would otherwise hamper the convergence of the Fourier mode sum at the particle’s location; (iii) standard mode-sum regularization, which finally yields the physical GSF as a sum over regularized modal contributions. We present a working code that implements this strategy to calculate the Lorenz-gauge GSF along eccentric geodesic orbits around a Schwarzschild black hole. The code is far more efficient than existing time-domain methods; the gain in computation speed (at a given precision) is about an order of magnitude at an eccentricity of 0.2, and up to 3 orders of magnitude for circular or nearly circular orbits. This increased efficiency was crucial in enabling the recently reported calculation of the long-term orbital evolution of an extreme mass ratio inspiral [N. Warburton, S. Akcay, L. Barack, J. R. Gair, and N. Sago, Phys. Rev. D 85, 061501(R) (2012)]. Here we provide full technical details of our method to complement the above report.
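
    For reference, step (iii) uses the standard mode-sum formula, written here schematically in the Barack-Ori form (see the cited references for the precise conventions); the regularization parameters A, B, C, D are known analytically:

        F^{\mathrm{self}}_{\alpha}
          = \sum_{l=0}^{\infty}
            \left[ F^{\mathrm{ret},\,l\pm}_{\alpha} - A^{\pm}_{\alpha}\,L - B_{\alpha} - \frac{C_{\alpha}}{L} \right]
            - D_{\alpha},
        \qquad L \equiv l + \tfrac{1}{2}.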

  19. NIMROD Modeling of Sawtooth Modes Using Hot-Particle Closures

    NASA Astrophysics Data System (ADS)

    Kruger, Scott; Jenkins, T. G.; Held, E. D.; King, J. R.

    2015-11-01

    In DIII-D shot 96043, RF heating gives rise to an energetic ion population that alters the sawtooth stability boundary, replacing conventional sawtooth cycles by longer-period, larger-amplitude `giant sawtooth' oscillations. We explore the use of particle-in-cell closures within the NIMROD code to numerically represent the RF-induced hot-particle distribution, and investigate the role of this distribution in determining the altered mode onset threshold and subsequent nonlinear evolution. Equilibrium reconstructions from the experimental data are used to enable these detailed validation studies. Effects of other parameters on the sawtooth behavior, such as the plasma Lundquist number and hot-particle beta-fraction, are also considered. The fast energetic particles present many challenges for the PIC closure. We review new algorithm and performance improvements to address these challenges, and provide a preliminary assessment of the efficacy of the PIC closure versus a continuum model for energetic particle modeling. We also compare our results with those of, and discuss plans for a more complete validation campaign for this discharge. Supported by US Department of Energy via the SciDAC Center for Extended MHD Modeling (CEMM).

  20. Alternate Operating Modes For NDCX-II

    NASA Astrophysics Data System (ADS)

    Sharp, W. M.; Friedman, A.; Grote, D. P.; Cohen, R. H.; Lund, S. M.; Vay, J.-L.; Waldron, W. L.

    2012-10-01

    NDCX-II is a newly completed accelerator facility at LBNL, built to study ion-heated warm dense matter and aspects of ion-driven targets for inertial-fusion energy. The baseline design calls for using twelve induction cells to accelerate 40 nC of Li+ ions to 1.2 MeV. During commissioning, though, we plan to extend the source lifetime by extracting less total charge. For operational flexibility, the option of using a helium plasma source is also being investigated. Over time, we expect that NDCX-II will be upgraded to substantially higher energies, necessitating the use of heavier ions to keep a suitable deposition range in targets. Each of these options requires development of an alternate acceleration schedule and the associated transverse focusing. The schedules here are first worked out with a fast-running 1-D particle-in-cell code ASP, then 2-D and 3-D Warp simulations are used to verify the 1-D results and to design transverse focusing.

  1. ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems.

    PubMed

    Niethammer, Christoph; Becker, Stefan; Bernreuther, Martin; Buchholz, Martin; Eckhardt, Wolfgang; Heinecke, Alexander; Werth, Stephan; Bungartz, Hans-Joachim; Glass, Colin W; Hasse, Hans; Vrabec, Jadran; Horsch, Martin

    2014-10-14

    The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.

  2. CICART Center For Integrated Computation And Analysis Of Reconnection And Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharjee, Amitava

    CICART is a partnership between the University of New Hampshire (UNH) and Dartmouth College. CICART addresses two important science needs of the DoE: the basic understanding of magnetic reconnection and turbulence that strongly impacts the performance of fusion plasmas, and the development of new mathematical and computational tools that enable the modeling and control of these phenomena. The principal participants of CICART constitute an interdisciplinary group, drawn from the communities of applied mathematics, astrophysics, computational physics, fluid dynamics, and fusion physics. It is a main premise of CICART that fundamental aspects of magnetic reconnection and turbulence in fusion devices, smaller-scale laboratory experiments, and space and astrophysical plasmas can be viewed from a common perspective, and that progress in understanding in any of these interconnected fields is likely to lead to progress in others. The establishment of CICART has strongly impacted the education and research mission of a new Program in Integrated Applied Mathematics in the College of Engineering and Applied Sciences at UNH by enabling the recruitment of a tenure-track faculty member, supported equally by UNH and CICART, and the establishment of an IBM-UNH Computing Alliance. The proposed areas of research in magnetic reconnection and turbulence in astrophysical, space, and laboratory plasmas include the following topics: (A) Reconnection and secondary instabilities in large high-Lundquist-number plasmas, (B) Particle acceleration in the presence of multiple magnetic islands, (C) Gyrokinetic reconnection: comparison with fluid and particle-in-cell models, (D) Imbalanced turbulence, (E) Ion heating, and (F) Turbulence in laboratory (including fusion-relevant) experiments. These theoretical studies make active use of three high-performance computer simulation codes: (1) The Magnetic Reconnection Code, based on extended two-fluid (or Hall MHD) equations, in an Adaptive Mesh Refinement (AMR) framework, (2) the Particle Simulation Code, a fully electromagnetic 3D Particle-In-Cell (PIC) code that includes a collision operator, and (3) GS2, an Eulerian, electromagnetic, kinetic code that is widely used in the fusion program, and simulates the nonlinear gyrokinetic equations, together with a self-consistent set of Maxwell’s equations.

  3. Compressional Alfvén eigenmodes in rotating spherical tokamak plasmas

    DOE PAGES

    Smith, H. M.; Fredrickson, E. D.

    2017-02-07

    Spherical tokamaks often have a considerable toroidal plasma rotation of several tens of kHz. Compressional Alfvén eigenmodes in such devices therefore experience a frequency shift, which, if the plasma were rotating as a rigid body, would be a simple Doppler shift. However, since the rotation frequency depends on minor radius, the eigenmodes are affected in a more complicated way. The eigenmode solver CAE3B (Smith et al 2009 Plasma Phys. Control. Fusion 51 075001) has been extended to account for toroidal plasma rotation. The results show that the eigenfrequency shift due to rotation can be approximated by a rigid-body rotation with a frequency computed from a spatial average of the real rotation profile weighted with the eigenmode amplitude. To investigate the effect of extending the computational domain to the vessel wall, a simplified eigenmode equation, yet retaining plasma rotation, is solved by a modified version of the CAE code used in Fredrickson et al (2013 Phys. Plasmas 20 042112). Lastly, both solving the full eigenmode equation, as in the CAE3B code, and placing the boundary at the vessel wall, as in the CAE code, significantly influence the calculated eigenfrequencies.
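
    The rigid-body-equivalent frequency described here can be pictured as an eigenmode-amplitude-weighted average of the rotation profile. The Python sketch below uses one plausible weighting (|amplitude|^2 with a cylindrical volume element) and toy profiles; the paper's exact weighting and the toroidal mode number are assumptions, not values taken from the source.

        import numpy as np

        def effective_rotation(r, omega_rot, mode_amp):
            """Amplitude-weighted average of a rotation profile Omega(r) [rad/s]."""
            weights = np.abs(mode_amp) ** 2 * r      # |amplitude|^2 with a cylindrical volume weight
            return np.average(omega_rot, weights=weights)

        r = np.linspace(0.0, 1.0, 200)                 # normalized minor radius
        omega = 2 * np.pi * 30e3 * (1.0 - r**2)        # core-peaked rotation, 30 kHz on axis
        amp = np.exp(-((r - 0.6) / 0.15) ** 2)         # mid-radius localized eigenmode
        omega_eff = effective_rotation(r, omega, amp)
        n_tor = 4                                      # assumed toroidal mode number
        print(omega_eff / (2 * np.pi), n_tor * omega_eff / (2 * np.pi))  # [Hz], Doppler-like shift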

  4. Hawking Radiation of the Charged Particles via Tunneling from the ( n+2)-Dimensional Topological Reissner-Nordström-de Sitter Black Hole

    NASA Astrophysics Data System (ADS)

    Yan, Han

    2012-08-01

    Extending Parikh-Wilczek's semi-classical tunneling method, we discuss the Hawking radiation of charged massive particles via tunneling from the cosmological horizon of the (n+2)-dimensional topological Reissner-Nordström-de Sitter black hole. The result shows that, when energy conservation and electric charge conservation are taken into account, the derived spectrum deviates from the pure thermal one but satisfies the unitary theory, which offers a possible resolution of the information loss paradox.

  5. SU-E-T-656: Quantitative Analysis of Proton Boron Fusion Therapy (PBFT) in Various Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, D; Jung, J; Shin, H

    2015-06-15

    Purpose: The proton-boron fusion reaction produces three alpha particles, which can be exploited in radiotherapy applications. We performed simulation studies to determine the effectiveness of proton boron fusion therapy (PBFT) under various conditions. Methods: Boron uptake regions (BURs) of various widths and densities were implemented in the Monte Carlo N-Particle eXtended (MCNPX) simulation code. The effect of proton beam energy was considered for different BURs. Four simulation scenarios were designed to verify the effectiveness of the integrated boost observed in the proton boron reaction. In these simulations, the effect of proton beam energy was determined for different physical conditions, such as size, location, and boron concentration. Results: Proton dose amplification was confirmed for all proton beam energies considered (< 96.62%). Based on the simulation results for different physical conditions, the threshold for the range in which proton dose amplification occurred was estimated as 0.3 cm. Effective proton boron reaction requires the boron concentration to be equal to or greater than 14.4 mg/g. Conclusion: We established the effects of PBFT under various conditions by using Monte Carlo simulation. The results of our research can be used to provide a PBFT dose database.

  6. Numerical simulation of ion charge breeding in electron beam ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, L., E-mail: zhao@far-tech.com; Kim, Jin-Soo

    2014-02-15

    The Electron Beam Ion Source particle-in-cell code (EBIS-PIC) tracks ions in an EBIS electron beam, updating the electric potential self-consistently and treating atomic processes by the Monte Carlo method. Recent improvements to the code are reported in this paper. The ionization module has been improved by using experimental ionization energies and shell effects. The acceptance of injected ions and the emittance of the extracted ion beam are calculated by extending EBIS-PIC to the beam line transport region. An EBIS-PIC simulation is performed for a Cs charge-breeding experiment at BNL. The charge state distribution agrees well with experiments, and additional simulation results of radial profiles and velocity space distributions of the trapped ions are presented.

  7. Monte Carlo parametric studies of neutron interrogation with the Associated Particle Technique for cargo container inspections

    NASA Astrophysics Data System (ADS)

    Deyglun, Clément; Carasco, Cédric; Pérot, Bertrand

    2014-06-01

    The detection of Special Nuclear Materials (SNM) by neutron interrogation is extensively studied by Monte Carlo simulation at the Nuclear Measurement Laboratory of CEA Cadarache (French Alternative Energies and Atomic Energy Commission). The active inspection system is based on the Associated Particle Technique (APT). Fissions induced by tagged neutrons (i.e. correlated to an alpha particle in the DT neutron generator) in SNM produce high multiplicity coincidences which are detected with fast plastic scintillators. At least three particles are detected in a short time window following the alpha detection, whereas nonnuclear materials mainly produce single events, or pairs due to (n,2n) and (n,n'γ) reactions. To study the performances of an industrial cargo container inspection system, Monte Carlo simulations are performed with the MCNP-PoliMi transport code, which records for each neutron history the relevant information: reaction types, position and time of interactions, energy deposits, secondary particles, etc. The output files are post-processed with a specific tool developed with ROOT data analysis software. Particles not correlated with an alpha particle (random background), counting statistics, and time-energy resolutions of the data acquisition system are taken into account in the numerical model. Various matrix compositions, suspicious items, SNM shielding and positions inside the container, are simulated to assess the performances and limitations of an industrial system.

  8. A fast low-to-high confinement mode bifurcation dynamics in the boundary-plasma gyrokinetic code XGC1

    DOE PAGES

    Ku, S.; Chang, C. S.; Hager, R.; ...

    2018-04-18

    Here, a fast edge turbulence suppression event has been simulated in the electrostatic version of the gyrokinetic particle-in-cell code XGC1 in a realistic diverted tokamak edge geometry under neutral particle recycling. The results show that the sequence of turbulent Reynolds stress followed by neoclassical ion orbit-loss driven together conspire to form the sustaining radial electric field shear and to quench turbulent transport just inside the last closed magnetic flux surface. As a result, the main suppression action is located in a thin radial layer around ψ_N ≃ 0.96–0.98, where ψ_N is the normalized poloidal flux, with the time scale ~0.1 ms.

  9. Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic

    2015-11-01

    The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. The FPSDP will allow physicists to directly compare key laboratory measurements to simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. The Fusion Plasma Synthetic Diagnostics Platform now has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes with synthetic diagnostic modules including: (i) 2D and 3D Reflectometry; (ii) Beam Emission Spectroscopy; and (iii) 1D Electron Cyclotron Emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD M3D-C1 code, and the electromagnetic hybrid NOVAK eigenmode code. Progress toward development of a more comprehensive 2D Electron Cyclotron Emission module will also be discussed. This work is supported by DOE contract #DEAC02-09CH11466.

  10. Dosimetric and microdosimetric analyses for blood exposed to reactor-derived thermal neutrons.

    PubMed

    Ali, F; Atanackovic, J; Boyer, C; Festarini, A; Kildea, J; Paterson, L C; Rogge, R; Stuart, M; Richardson, R B

    2018-06-06

    Thermal neutrons are found in reactor, radiotherapy, aircraft, and space environments. The purpose of this study was to characterise the dosimetry and microdosimetry of thermal neutron exposures, using three simulation codes, as a precursor to quantitative radiobiological studies using blood samples. An irradiation line was designed employing a pyrolytic graphite crystal or, alternatively, a super mirror to expose blood samples to thermal neutrons from the National Research Universal reactor to determine radiobiological parameters. The crystal was used when assessing the relative biological effectiveness for dicentric chromosome aberrations, and other biomarkers, in lymphocytes over a low absorbed dose range of 1.2-14 mGy. Higher exposures using a super mirror will allow the additional quantification of mitochondrial responses. The physical size of the thermal neutron fields and their respective wavelength distribution was determined using the McStas Monte Carlo code. Spinning the blood samples produced a spatially uniform absorbed dose as determined from Monte Carlo N-Particle version 6 simulations. The major part (71%) of the total absorbed dose to blood was determined to be from the 14N(n,p)14C reaction and the remainder from the 1H(n,γ)2H reaction. Previous radiobiological experiments at Canadian Nuclear Laboratories involving thermal neutron irradiation of blood yielded a relative biological effectiveness of 26 ± 7. Using the Particle and Heavy Ion Transport Code System, a similar value of ∼19 for the quality factor of thermal neutrons initiating the 14N(n,p)14C reaction in soft tissue was determined by microdosimetric simulations. This calculated quality factor is of similar high value to the experimentally-derived relative biological effectiveness, and indicates the potential of thermal neutrons to induce deleterious health effects in superficial organs such as cataracts of the eye lens.

  11. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
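
    The patch-based balancing idea is to order patches along a space-filling curve and cut the accumulated per-patch load into nearly equal pieces. The Python sketch below does this with a Morton (Z-order) key; PSC has its own curve and cost model, so this is only a schematic of the approach, with made-up patch loads.

        import numpy as np

        def morton2d(ix, iy, bits=10):
            """Interleave the bits of (ix, iy) to get a 2-D Morton (Z-order) key."""
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
            return key

        def balance_patches(patch_ij, loads, n_ranks):
            """Assign patches to ranks by cutting the Morton-ordered cumulative load
            into n_ranks pieces of roughly equal size."""
            order = np.argsort([morton2d(i, j) for i, j in patch_ij])
            cum = np.cumsum(np.asarray(loads, dtype=float)[order])
            targets = cum[-1] * (np.arange(1, n_ranks + 1) / n_ranks)
            owner = np.empty(len(loads), dtype=int)
            rank = 0
            for idx, p in enumerate(order):
                while cum[idx] > targets[rank] and rank < n_ranks - 1:
                    rank += 1
                owner[p] = rank
            return owner

        # 8x8 patch grid with random per-patch particle counts, distributed over 4 ranks
        rng = np.random.default_rng(2)
        patches = [(i, j) for i in range(8) for j in range(8)]
        loads = rng.integers(10, 1000, size=len(patches))
        owners = balance_patches(patches, loads, n_ranks=4)
        print([loads[owners == r].sum() for r in range(4)])   # roughly equal per-rank load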

  12. A determination of the L dependence of the radial diffusion coefficient for protons in Jupiter's inner magnetosphere

    NASA Technical Reports Server (NTRS)

    Thomsen, M. F.; Goertz, C. K.; Van Allen, J. A.

    1977-01-01

    In a previous paper (Thomsen et al., 1977), a technique was proposed for estimating the radial diffusion coefficient in the inner magnetosphere of Jupiter from the observations of the sweeping effect of the inner Jovian satellites on the fluxes of the energetic charged particles. The present paper extends this technique to permit the unique identification of the parameters D_0 and n, where the diffusion coefficient is assumed to be of the form D = D_0 L^n. The derived value of D_0 depends directly on assumptions regarding the nature and efficiency of the loss mechanism operating on the particles, while the value of n depends only on the assumed width of the loss region. The extended technique is applied to the University of Iowa Pioneer 11 proton data, leading to values of n of about 0 and D(6) of about 3 x 10^-8 (R_J)^2/sec, when satellite sweep-up losses are assumed to be the only loss operating on the protons. The small value of n is strong evidence that the radial diffusion is driven by ionospheric winds.
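
    Extracting D_0 and n from a set of D(L) estimates is a straight-line fit in log-log space; a minimal Python sketch (with synthetic data and an arbitrary placeholder exponent, not the Pioneer 11 measurements) is:

        import numpy as np

        def fit_diffusion_law(L, D):
            """Least-squares fit of D(L) = D0 * L**n in log-log space; returns (D0, n)."""
            n, log_D0 = np.polyfit(np.log(L), np.log(D), 1)
            return np.exp(log_D0), n

        # Synthetic D(L) with an arbitrary placeholder exponent and 5% scatter
        rng = np.random.default_rng(0)
        L = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
        D = 1e-9 * L**3 * (1.0 + 0.05 * rng.normal(size=L.size))
        print(fit_diffusion_law(L, D))   # recovers roughly (1e-9, 3)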

  13. Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.

    PubMed

    Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans

    2018-01-01

    Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enables the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single-particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determine relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for identification of homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.

  14. A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromley, Benjamin C.; Kenyon, Scott J., E-mail: bromley@physics.utah.edu, E-mail: skenyon@cfa.harvard.edu

    2011-04-20

    We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M_sun disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M_⊕.

  15. Brief Report: Repetitive Behaviors in Young Children with Autism Spectrum Disorder and Developmentally Similar Peers--A Follow Up to Watt et al. (2008)

    ERIC Educational Resources Information Center

    Barber, Angela B.; Wetherby, Amy M.; Chambers, Nola W.

    2012-01-01

    The present study extended the findings of Watt et al. (J Autism Dev Disord 38:1518-1533, 2008) by investigating repetitive and stereotyped behaviors (RSB) demonstrated by children with autism spectrum disorder (ASD; n = 50) and children with typical development (TD; n = 50) matched on developmental age, gender, and parents' education level. RSB were coded from videotaped Communication and…

  16. MCNP Version 6.2 Release Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, Christopher John; Bull, Jeffrey S.; Solomon, C. J.

    Monte Carlo N-Particle or MCNP® is a general-purpose Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. This MCNP Version 6.2 follows the MCNP6.1.1 beta version and has been released in order to provide the radiation transport community with the latest feature developments and bug fixes for MCNP. Since the last release of MCNP, major work has been conducted to improve the code base, add features, and provide tools to facilitate ease of use of MCNP version 6.2 as well as the analysis of results. These release notes serve as a general guide for the new/improved physics, source, data, tallies, unstructured mesh, code enhancements and tools. For more detailed information on each of the topics, please refer to the appropriate references or the user manual which can be found at http://mcnp.lanl.gov. This release of MCNP version 6.2 contains 39 new features in addition to 172 bug fixes and code enhancements. There are still some 33 known issues the user should familiarize themselves with (see Appendix).

  17. OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon

    2010-10-01

    Octgrav is a very fast tree-code which runs on massively parallel Graphical Processing Units (GPU) with NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree-construction and calculation of multipole moments is carried out on the host CPU, while the force calculation which consists of tree walks and evaluation of interaction list is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which completely runs on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during the tree-construction and shows a performance improvement of more than a factor 20 overall, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
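
    The opening angle quoted above enters through the standard Barnes-Hut multipole-acceptance test; a minimal Python sketch of that test (not the CUDA tree walk itself) is:

        import numpy as np

        def must_open(node_size, node_com, target, theta=0.5):
            """Open a tree node if it subtends an angle larger than theta as seen
            from the target position (Barnes-Hut criterion s/d > theta)."""
            d = np.linalg.norm(np.asarray(node_com, dtype=float) - np.asarray(target, dtype=float))
            return node_size / d > theta

        # A cell of side 1 at distance 3 is accepted for theta = 0.5 but opened for theta = 0.3
        print(must_open(1.0, (3.0, 0.0, 0.0), (0.0, 0.0, 0.0), theta=0.5))   # False
        print(must_open(1.0, (3.0, 0.0, 0.0), (0.0, 0.0, 0.0), theta=0.3))   # True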

  18. Space Radiation Transport Code Development: 3DHZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The space radiation transport code, HZETRN, has been used extensively for research, vehicle design optimization, risk analysis, and related applications. One of the simplifying features of the HZETRN transport formalism is the straight-ahead approximation, wherein all particles are assumed to travel along a common axis. This reduces the governing equation to one spatial dimension allowing enormous simplification and highly efficient computational procedures to be implemented. Despite the physical simplifications, the HZETRN code is widely used for space applications and has been found to agree well with fully 3D Monte Carlo simulations in many circumstances. Recent work has focused on the development of 3D transport corrections for neutrons and light ions (Z < 2) for which the straight-ahead approximation is known to be less accurate. Within the development of 3D corrections, well-defined convergence criteria have been considered, allowing approximation errors at each stage in model development to be quantified. The present level of development assumes the neutron cross sections have an isotropic component treated within N explicit angular directions and a forward component represented by the straight-ahead approximation. The N = 1 solution refers to the straight-ahead treatment, while N = 2 represents the bi-directional model in current use for engineering design. The figure below shows neutrons, protons, and alphas for various values of N at locations in an aluminum sphere exposed to a solar particle event (SPE) spectrum. The neutron fluence converges quickly in simple geometry with N > 14 directions. The improved code, 3DHZETRN, transports neutrons, light ions, and heavy ions under space-like boundary conditions through general geometry while maintaining a high degree of computational efficiency. A brief overview of the 3D transport formalism for neutrons and light ions is given, and extensive benchmarking results with the Monte Carlo codes Geant4, FLUKA, and PHITS are provided for a variety of boundary conditions and geometries. Improvements provided by the 3D corrections are made clear in the comparisons. Developments needed to connect 3DHZETRN to vehicle design and optimization studies will be discussed. Future theoretical development will relax the forward plus isotropic interaction assumption to more general angular dependence.

  19. Computer simulation of plasma and N-body problems

    NASA Technical Reports Server (NTRS)

    Harries, W. L.; Miller, J. B.

    1975-01-01

    The following FORTRAN language computer codes are presented: (1) efficient two- and three-dimensional central force potential solvers; (2) a three-dimensional simulator of an isolated galaxy which incorporates the potential solver; (3) a two-dimensional particle-in-cell simulator of the Jeans instability in an infinite self-gravitating compressible gas; and (4) a two-dimensional particle-in-cell simulator of a rotating self-gravitating compressible gaseous system of which rectangular coordinate and superior polar coordinate versions were written.

  20. Implementation of a 3D version of ponderomotive guiding center solver in particle-in-cell code OSIRIS

    NASA Astrophysics Data System (ADS)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2016-10-01

    Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are very computationally expensive due to the disparity of the relevant scales, ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten centimeter range. To bridge the gap between these disparate scales, the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser pulse envelope separately, only the scales larger than the plasma wavelength are required to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of the 3D version of a PGC solver into the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization. We also discuss the stability of the PGC solver.

  1. Anisotropic diffusion in mesh-free numerical magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2017-04-01

    We extend recently developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect) and turbulent 'eddy diffusion'. We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV). We show that the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behaviour even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators and non-linear flux limiters, which is trivially generalized to AMR/moving-mesh codes. We also present applications of some of these improvements for SPH, in the form of a new integral-Godunov SPH formulation that adopts a moving-least squares gradient estimator and introduces a flux-limited Riemann problem between particles.

  2. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

    This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP. Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.

  3. Estimation of relative biological effectiveness for boron neutron capture therapy using the PHITS code coupled with a microdosimetric kinetic model

    PubMed Central

    Horiguchi, Hironori; Sato, Tatsuhiko; Kumada, Hiroaki; Yamamoto, Tetsuya; Sakae, Takeji

    2015-01-01

    The absorbed doses deposited by boron neutron capture therapy (BNCT) can be categorized into four components: α and 7Li particles from the 10B(n, α)7Li reaction, 0.54-MeV protons from the 14N(n, p)14C reaction, the recoiled protons from the 1H(n, n)1H reaction, and photons from the neutron beam and 1H(n, γ)2H reaction. For evaluating the irradiation effect in tumors and the surrounding normal tissues in BNCT, it is of great importance to estimate the relative biological effectiveness (RBE) for each dose component in the same framework. We have, therefore, established a new method for estimating the RBE of all BNCT dose components on the basis of the microdosimetric kinetic model. This method employs the probability density of lineal energy, y, in a subcellular structure as the index for expressing RBE, which can be calculated using the microdosimetric function implemented in the particle transport simulation code (PHITS). The accuracy of this method was tested by comparing the calculated RBE values with corresponding measured data in a water phantom irradiated with an epithermal neutron beam. The calculation technique developed in this study will be useful for biological dose estimation in treatment planning for BNCT. PMID:25428243

  4. Hybrid simulations of Alfvén modes driven by energetic particles

    NASA Astrophysics Data System (ADS)

    Zhu, J.; Ma, Z. W.; Wang, S.

    2016-12-01

    A hybrid kinetic-magnetohydrodynamic code (CLT-K) is developed to study the nonlinear dynamics of Alfvén modes driven by energetic particles (EPs). An n = 2 toroidicity-induced discrete shear Alfvén eigenmode (TAE)-type energetic particle mode (EPM) with two dominant poloidal harmonics (m = 2 and 3) is first excited and its frequency remains unchanged in the early phase. Later, a new branch of the n = 2 frequency with a single dominant poloidal mode (m = 3) splits from the original TAE-type EPM. The new single-m EPM (m = 3) slowly moves radially outward with downward chirping of the frequency, and its amplitude remains at a higher level. The original EPM remains at its original position without frequency chirping, but its amplitude decays with time. Finally, the m = 3 EPM becomes dominant and the frequency falls into the β-induced gap of the Alfvén continuum. The redistribution of δf in phase space is consistent with the downward chirping of the mode frequency, and the drift direction of the resonance region is mainly due to the biased free energy profile. The transition from a TAE-type EPM to a single-m EPM is mainly caused by extension of the p = 0 trapped particle resonance in phase space.

  5. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A., E-mail: apalumbo@bnl.gov; Herman, M.; Capote, R.

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron, charged particle and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present the application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  6. Overview of transport, fast particle and heating and current drive physics using tritium in JET plasmas

    NASA Astrophysics Data System (ADS)

    Stork, D.; Baranov, Yu.; Belo, P.; Bertalot, L.; Borba, D.; Brzozowski, J. H.; Challis, C. D.; Ciric, D.; Conroy, S.; de Baar, M.; de Vries, P.; Dumortier, P.; Garzotti, L.; Hawkes, N. C.; Hender, T. C.; Joffrin, E.; Jones, T. T. C.; Kiptily, V.; Lamalle, P.; Mailloux, J.; Mantsinen, M.; McDonald, D. C.; Nave, M. F. F.; Neu, R.; O'Mullane, M.; Ongena, J.; Pearce, R. J.; Popovichev, S.; Sharapov, S. E.; Stamp, M.; Stober, J.; Surrey, E.; Valovic, M.; Voitsekhovitch, I.; Weisen, H.; Whiteford, A. D.; Worth, L.; Yavorskij, V.; Zastrow, K.-D.; EFDA contributors, JET

    2005-10-01

    Results are presented from the JET Trace Tritium Experimental (TTE) campaign using minority tritium (T) plasmas (n_T/n_D < 3%). Thermal tritium particle transport coefficients (D_T, v_T) are found to exceed neo-classical values in all regimes, except in ELMy H-modes at high densities and in the region of internal transport barriers (ITBs) in reversed shear plasmas. In ELMy H-mode dimensionless parameter scans, at q_95 ~ 2.8 and triangularity δ = 0.2, the T particle transport scales in a gyro-Bohm manner in the inner plasma (r/a < 0.4), whilst the outer plasma particle transport scaling is more Bohm-like. Dimensionless parameter scans show contrasting behaviour for the trace particle confinement (increases with collisionality, ν* and β) and bulk energy confinement (decreases with ν* and is independent of β). In an extended ELMy H-mode data set, with ρ*, ν*, β and q varied but with neo-classical tearing modes (NTMs) either absent or limited to weak, benign core modes (4/3 or above), the multiparameter fit to the normalized diffusion coefficient in the outer plasma (0.65 < r/a < 0.8) gives D_T/B_φ ~ ρ*^2.46 ν*^-0.23 β^-1.01 q^2.03. In hybrid scenarios (q_min ~ 1, low positive shear, no sawteeth), the T particle confinement is found to scale with increasing triangularity and plasma current. Comparing regimes (ELMy H-mode, ITB plasma and hybrid scenarios) in the outer plasma region, a correlation of high values of D_T with high values of v_T is seen. The normalized diffusion coefficients for the hybrid and ITB scenarios do not fit the scaling derived for ELMy H-modes. The normalized tritium diffusion scales with the normalized poloidal Larmor radius (ρ_θ* = qρ*) in a manner close to gyro-Bohm (~ρ_θ*^3), with an added inverse β dependence. The effects of ELMs, sawteeth and NTMs on the T particle transport are described. Fast-ion confinement in current-hole (CH) plasmas was tested in TTE by tritium neutral beam injection into JET CH plasmas. γ-rays from the reactions of fusion alphas with beryllium impurities (9Be(α, nγ)12C) characterized the evolution of the fast fusion-alpha population. The γ-decay times are consistent with classical alpha plus parent fast triton slowing-down times (τ_Ts + τ_αs) for high plasma currents (I_p > 2 MA) and monotonic q-profiles. In CH discharges the γ-ray emission decay times are much shorter than classical (τ_Ts + τ_αs), indicating alpha confinement degradation due to the orbit losses and particle orbit drift predicted by a 3-D Fokker-Planck numerical code and modelled using TRANSP.

  7. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, e.g. on conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (S_n) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the S_n codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk) that significantly increases the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D S_n solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters. For complex problems, generally, this process is very time consuming and may be complicated due to the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A³MCNP (automated adjoint accelerated MCNP) code that automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the S_n adjoint function. A³MCNP prepares the necessary input files for performing multigroup, 3-D adjoint S_n calculations using TORT.
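
    As background on the adjoint-based biasing mentioned above, consistent adjoint-driven schemes typically set the weight-window centers inversely proportional to the adjoint (importance) function and bias the source proportionally to it; the relations below are a generic sketch of that idea, not the specific A³MCNP prescription:

        \hat{w}(\vec{r},E) \;\propto\; \frac{1}{\phi^{\dagger}(\vec{r},E)},
        \qquad
        \hat{q}(\vec{r},E) \;\propto\; \phi^{\dagger}(\vec{r},E)\, q(\vec{r},E),

    where φ† is the adjoint flux from the S_n calculation, q the physical source, ŵ the weight-window center, and q̂ the biased source.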

  9. Axisymmetric Tandem Mirrors: Stabilization and Confinement Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, R.F.; Fowler, T.K.; Bulmer, R.

    2005-01-15

    The 'Kinetic Stabilizer' has been proposed as a means of MHD stabilizing an axisymmetric tandem mirror system. The K-S concept is based on theoretical studies by Ryutov, confirmed experimentally in the Gas Dynamic Trap experiment in Novosibirsk. In the K-S, beams of ions are directed into the end of an 'expander' region outside the outer mirror of a tandem mirror. These ions, slowed, stagnated, and reflected as they move up the magnetic gradient, produce a low-density stabilizing plasma. At the Lawrence Livermore National Laboratory we have been conducting theoretical and computational studies of the K-S Tandem Mirror. These studies have employed a low-beta code written especially to analyze the beam injection/stabilization process, and a new code SYMTRAN (by Hua and Fowler) that solves the coupled radial and axial particle and energy transport in a K-S T-M. Also, a 'legacy' MHD stability code, FLORA, has been upgraded and employed to benchmark the injection/stabilization code and to extend its results to high beta values. The FLORA code studies so far have confirmed the effectiveness of the K-S in stabilizing high-beta (40%) plasmas with stabilizer plasmas whose peak pressures are several orders of magnitude smaller than those of the confined plasma. Also, the SYMTRAN code has shown D-T plasma ignition from alpha particle energy deposition in T-M regimes with strong end plugging. Our studies have confirmed the viability of the K-S T-M concept with respect to MHD stability and radial and axial confinement. We are continuing these studies in order to optimize the parameters and to examine means for the stabilization of possible residual instability modes, such as drift modes and 'trapped-particle' modes. These modes may in principle be controlled by tailoring the stabilizer plasma distribution and/or the radial potential distribution. In the paper the results to date of our studies are summarized and projected to scope out possible fusion-power versions of the K-S T-M.

  10. Axisymmetric Tandem Mirrors: Stabilization and Confinement Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, R F; Fowler, T K; Bulmer, R

    2004-07-15

    The 'Kinetic Stabilizer' has been proposed as a means of MHD stabilizing an axisymmetric tandem mirror system. The K-S concept is based on theoretical studies by Ryutov, confirmed experimentally in the Gas Dynamic Trap experiment in Novosibirsk. In the K-S, beams of ions are directed into the end of an 'expander' region outside the outer mirror of a tandem mirror. These ions, slowed, stagnated, and reflected as they move up the magnetic gradient, produce a low-density stabilizing plasma. At the Lawrence Livermore National Laboratory we have been conducting theoretical and computational studies of the K-S Tandem Mirror. These studies have employed a low-beta code written especially to analyze the beam injection/stabilization process, and a new code SYMTRAN (by Hua and Fowler) that solves the coupled radial and axial particle and energy transport in a K-S T-M. Also, a 'legacy' MHD stability code, FLORA, has been upgraded and employed to benchmark the injection/stabilization code and to extend its results to high beta values. The FLORA code studies so far have confirmed the effectiveness of the K-S in stabilizing high-beta (40%) plasmas with stabilizer plasmas whose peak pressures are several orders of magnitude smaller than those of the confined plasma. Also, the SYMTRAN code has shown D-T plasma ignition from alpha particle energy deposition in T-M regimes with strong end plugging. Our studies have confirmed the viability of the K-S T-M concept with respect to MHD stability and radial and axial confinement. We are continuing these studies in order to optimize the parameters and to examine means for the stabilization of possible residual instability modes, such as drift modes and 'trapped-particle' modes. These modes may in principle be controlled by tailoring the stabilizer plasma distribution and/or the radial potential distribution. In the paper the results to date of our studies are summarized and projected to scope out possible fusion-power versions of the K-S T-M.

  11. EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.

    2014-04-01

    The European Activation SYstem has been re-engineered and re-written in modern programming languages so as to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization and life, and storage cycle management. In parallel, the maturity of modern, truly general purpose libraries encompassing thousands of target isotopes such as TENDL-2012, the evolution of the ENDF-6 format and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ), are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, but also benefits already from the feedback from extensive validation and verification activities performed with its predecessor.
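
    At the core of such an inventory calculation is a set of coupled rate (Bateman-type) equations for the nuclide number densities N_i; a generic form is sketched below (Φ denotes the flux of the incident particles, σ the relevant reaction cross sections and λ the decay constants), without implying that this is FISPACT-II's exact internal formulation:

        \frac{dN_{i}}{dt} \;=\; \sum_{j \neq i}\bigl(\lambda_{j\to i} + \Phi\,\sigma_{j\to i}\bigr)\,N_{j} \;-\; \bigl(\lambda_{i} + \Phi\,\sigma_{i}\bigr)\,N_{i}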

  12. New doubly-symmetric families of comet-like periodic orbits in the spatial restricted (N + 1)-body problem

    NASA Astrophysics Data System (ADS)

    Llibre, Jaume; Roberto, Luci Any

    2009-07-01

    For any positive integer N ≥ 2 we prove the existence of a new family of periodic solutions for the spatial restricted (N + 1)-body problem. In these solutions the infinitesimal particle is very far from the primaries. They have large inclinations and some symmetries. In fact, we extend results of Howison and Meyer (J. Diff. Equ. 163:174-197, 2000) from N = 2 to any positive integer N ≥ 2.

  13. Multitasking the three-dimensional transport code TORT on CRAY platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.; Burre, C.A.

    1996-04-01

    The multitasking options in the three-dimensional neutral particle transport code TORT, originally implemented for Cray's CTSS operating system, are revived and extended to run on Cray Y/MP and C90 computers using the UNICOS operating system. These include two coarse-grained domain decompositions: across octants, and across directions within an octant, termed Octant Parallel (OP) and Direction Parallel (DP), respectively. Parallel performance of the DP is significantly enhanced by increasing the task grain size and reducing load imbalance via dynamic scheduling of the discrete angles among the participating tasks. Substantial wall-clock speedup factors, approaching 4.5 using 8 tasks, have been measured in a time-sharing environment, and generally depend on the test problem specifications, number of tasks, and machine loading during execution.

  14. Two-dimensional hybrid Monte Carlo–fluid modelling of dc glow discharges: Comparison with fluid models, reliability, and accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eylenceoğlu, E.; Rafatov, I., E-mail: rafatov@metu.edu.tr; Kudryavtsev, A. A.

    2015-01-15

    A two-dimensional hybrid Monte Carlo–fluid numerical code is developed and applied to model the dc glow discharge. The model is based on the separation of electrons into two groups: low-energy (slow) and high-energy (fast) electrons. Ions and slow electrons are described within the fluid model using the drift-diffusion approximation for particle fluxes. Fast electrons, represented by a suitable number of super-particles emitted from the cathode, are responsible for ionization processes in the discharge volume, which are simulated by the Monte Carlo collision method. The electrostatic field is obtained from the solution of the Poisson equation. Test calculations were carried out for an argon plasma. The main properties of the glow discharge are considered. Current-voltage curves, the electric field reversal phenomenon, and vortex current formation are presented and discussed. The results are compared to those obtained from the simple and extended fluid models. Contrary to reports in the literature, the analysis does not reveal significant advantages of existing hybrid methods over the extended fluid model.
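
    To make the fluid part of such a hybrid scheme concrete, the sketch below advances a 1D drift-diffusion continuity equation for one charged species and solves the 1D Poisson equation on the same grid. All names, grid sizes and the ionization source S are illustrative assumptions, not the authors' implementation; the Monte Carlo fast-electron stage is represented here only by the fixed source array.

      import numpy as np

      def step_drift_diffusion(n, phi, mu, D, S, dx, dt, charge=+1):
          """One explicit step of dn/dt = -dGamma/dx + S with the drift-diffusion
          flux Gamma = charge*mu*n*E - D*dn/dx (slow electrons: charge=-1, ions: +1)."""
          E = -np.gradient(phi, dx)                      # electric field from the potential
          flux = charge * mu * n * E - D * np.gradient(n, dx)
          return n + dt * (-np.gradient(flux, dx) + S)

      def solve_poisson_1d(rho, dx, eps0=8.854e-12):
          """Finite-difference solve of d2phi/dx2 = -rho/eps0 with grounded
          (phi = 0) values assumed just outside both ends of the domain."""
          N = len(rho)
          A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
               + np.diag(np.ones(N - 1), -1)) / dx**2
          return np.linalg.solve(A, -rho / eps0)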

  15. Unified Models of Turbulence and Nonlinear Wave Evolution in the Extended Solar Corona and Solar Wind

    NASA Technical Reports Server (NTRS)

    Cranmer, Steven R.; Wagner, William (Technical Monitor)

    2003-01-01

    The PI (Cranmer) and Co-I (A. van Ballegooijen) made significant progress toward the goal of building a "unified model" of the dominant physical processes responsible for the acceleration of the solar wind. The approach outlined in the original proposal comprised two complementary pieces: (1) to further investigate individual physical processes under realistic coronal and solar wind conditions, and (2) to extract the dominant physical effects from simulations and apply them to a one-dimensional and time-independent model of plasma heating and acceleration. The accomplishments in the report period are thus divided into these two categories: 1a. Focused Study of Kinetic MHD Turbulence. We have developed a model of magnetohydrodynamic (MHD) turbulence in the extended solar corona that contains the effects of collisionless dissipation and anisotropic particle heating. A turbulent cascade is one possible way of generating small-scale fluctuations (easy to dissipate/heat) from a pre-existing population of low-frequency Alfven waves (difficult to dissipate/heat). We modeled the cascade as a combination of advection and diffusion in wavenumber space. The dominant spectral transfer occurs in the direction perpendicular to the background magnetic field. As expected from earlier models, this leads to a highly anisotropic fluctuation spectrum with a rapidly decaying tail in the parallel wavenumber direction. The wave power that decays to high enough frequencies to become ion cyclotron resonant depends on the relative strengths of advection and diffusion in the cascade. For the most realistic values of these parameters, though, there is insufficient power to heat protons and heavy ions. The dominant oblique waves undergo Landau damping, which implies strong parallel electron heating. We thus investigated the nonlinear evolution of the electron velocity distributions (VDFs) into parallel beams and discrete phase-space holes (similar to those seen in the terrestrial magnetosphere) which are an alternate means of heating protons via stochastic interactions similar to particle-particle collisions. 1b. Focused Study of the Multi-Mode Detailed Balance Formalism. The PI began to explore the feasibility of using the "weak turbulence," or detailed-balance theory of Tsytovich, Melrose, and others to encompass the relevant physics of the solar wind. This study did not go far, however, because if the "strong" MHD turbulence discussed above is a dominant player in the wind's acceleration region, this formalism is inherently not applicable to the corona. We will continue to study the various published approaches to the weak turbulence formalism, especially with an eye on ways to parameterize nonlinear wave reflection rates. 2. Building the Unified Model Code Architecture. We have begun developing the computational model of a time-steady open flux tube in the extended corona. The model will be "unified" in the sense that it will include (simultaneously for the first time) as many of the various proposed physical processes as possible, all on equal footing. To retain this generality, we have formulated the problem in two interconnected parts: a completely kinetic model for the particles, using the Monte Carlo approach, and a finite-difference approach for the self-consistent fluctuation spectra. The two codes are run sequentially and iteratively until complete consistency is achieved. 
The current version of the Monte Carlo code incorporates gravity, the zero-current electric field, magnetic mirroring, and collisions. The fluctuation code incorporates WKB wave action conservation and the cascade/dissipation processes discussed above. The codes are being run for various test problems with known solutions. Planned additions to the codes include prescriptions for nonlinear wave steepening, kinetic velocity-space diffusion, and multi-mode coupling (including reflection and refraction).
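
    A schematic way of writing the kind of wavenumber-space cascade described above is as an advection-diffusion equation for the perpendicular spectral energy density; the form below is a generic illustration with adjustable advection strength γ, diffusion strength D and damping rate γ_d, not the specific equation adopted in this work:

        \frac{\partial E(k_{\perp},t)}{\partial t} \;=\; \frac{\partial}{\partial k_{\perp}}\!\left[\gamma\, k_{\perp}\, E \;+\; D\, k_{\perp}^{2}\,\frac{\partial E}{\partial k_{\perp}}\right] \;-\; 2\,\gamma_{d}(k_{\perp})\, E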

  16. Solution of the Skyrme-Hartree–Fock–Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VIII) HFODD (v2.73y): A new version of the program

    DOE PAGES

    Schunck, N.; Dobaczewski, J.; Satuła, W.; ...

    2017-03-27

    Here, we describe the new version (v2.73y) of the code hfodd which solves the nuclear Skyrme Hartree–Fock or Skyrme Hartree–Fock–Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following new features: (i) full proton–neutron mixing in the particle–hole channel for Skyrme functionals, (ii) the Gogny force in both particle–hole and particle–particle channels, (iii) a linear multi-constraint method at finite temperature, (iv) a fission toolkit including the constraint on the number of particles in the neck between two fragments, calculation of the interaction energy between fragments, and calculation of the nuclear and Coulomb energy of each fragment, (v) the new version 200d of the code hfbtho, together with an enhanced interface between HFBTHO and HFODD, (vi) parallel capabilities, significantly extended by adding several restart options for large-scale jobs, (vii) the Lipkin translational energy correction method with pairing, (viii) higher-order Lipkin particle-number corrections, (ix) an interface to a program plotting single-particle energies or Routhians, (x) strong-force isospin-symmetry-breaking terms, and (xi) the Augmented Lagrangian Method for calculations with 3D constraints on angular momentum and isospin. Finally, an important bug related to the calculation of the entropy at finite temperature and several other minor errors of the previously published version were corrected.

  17. Solution of the Skyrme-Hartree–Fock–Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VIII) HFODD (v2.73y): A new version of the program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunck, N.; Dobaczewski, J.; Satuła, W.

    Here, we describe the new version (v2.73y) of the code hfodd which solves the nuclear Skyrme Hartree–Fock or Skyrme Hartree–Fock–Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following new features: (i) full proton–neutron mixing in the particle–hole channel for Skyrme functionals, (ii) the Gogny force in both particle–hole and particle–particle channels, (iii) a linear multi-constraint method at finite temperature, (iv) a fission toolkit including the constraint on the number of particles in the neck between two fragments, calculation of the interaction energy between fragments, and calculation of the nuclear and Coulomb energy of each fragment, (v) the new version 200d of the code hfbtho, together with an enhanced interface between HFBTHO and HFODD, (vi) parallel capabilities, significantly extended by adding several restart options for large-scale jobs, (vii) the Lipkin translational energy correction method with pairing, (viii) higher-order Lipkin particle-number corrections, (ix) an interface to a program plotting single-particle energies or Routhians, (x) strong-force isospin-symmetry-breaking terms, and (xi) the Augmented Lagrangian Method for calculations with 3D constraints on angular momentum and isospin. Finally, an important bug related to the calculation of the entropy at finite temperature and several other minor errors of the previously published version were corrected.

  18. PIXIE3D: A Parallel, Implicit, eXtended MHD 3D Code.

    NASA Astrophysics Data System (ADS)

    Chacon, L.; Knoll, D. A.

    2004-11-01

    We report on the development of PIXIE3D, a 3D parallel, fully implicit Newton-Krylov extended primitive-variable MHD code in general curvilinear geometry. PIXIE3D employs a second-order, finite-volume-based spatial discretization that satisfies remarkable properties such as being conservative, solenoidal in the magnetic field, non-dissipative, and stable in the absence of physical dissipation (L. Chacón, Comput. Phys. Comm., submitted, 2004). PIXIE3D employs fully implicit Newton-Krylov methods for the time advance. Currently, first- and second-order implicit schemes are available, although higher-order temporal implicit schemes can be effortlessly implemented within the Newton-Krylov framework. A successful, scalable, multigrid (MG) physics-based preconditioning strategy, similar in concept to previous 2D MHD efforts (L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); J. Comput. Phys. 188 (2), 573-592 (2003)), has been developed. We are currently in the process of parallelizing the code using the PETSc library and a Newton-Krylov-Schwarz approach for the parallel treatment of the preconditioner. In this poster, we will report on both the serial and parallel performance of PIXIE3D, focusing primarily on scalability and CPU speedup vs. an explicit approach.
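
    The fully implicit Newton-Krylov time advance described above can be illustrated with a Jacobian-free Newton-Krylov solve of a backward-Euler step for a simple 1D diffusion problem. This toy sketch uses SciPy's generic newton_krylov solver and omits the physics-based preconditioning that makes the real solver scalable; it is not PIXIE3D code.

      import numpy as np
      from scipy.optimize import newton_krylov

      def implicit_diffusion_step(u_old, dt, dx, kappa=1.0):
          """Backward-Euler step of du/dt = kappa*d2u/dx2 on a periodic grid,
          posed as a root-finding problem F(u_new) = 0 and solved with a
          Jacobian-free Newton-Krylov method (GMRES inside each Newton step)."""
          def residual(u):
              lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
              return u - u_old - dt * kappa * lap
          return newton_krylov(residual, u_old, f_tol=1e-10)

      # Example: take one implicit step on a Gaussian bump.
      x = np.linspace(0.0, 1.0, 128, endpoint=False)
      u0 = np.exp(-100.0 * (x - 0.5) ** 2)
      u1 = implicit_diffusion_step(u0, dt=1e-3, dx=x[1] - x[0])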

  19. Linear calculations of edge current driven kink modes with BOUT++ code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y.; Lawrence Livermore National Laboratory, Livermore, California 94550

    This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.

  20. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

    Accurate total reaction cross section models are important for achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  1. ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations

    NASA Astrophysics Data System (ADS)

    Freitag, Marc Dewi

    2013-02-01

    ME(SSY)**2 stands for “Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems.” This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring most important physical processes, allows million-particle simulations, spanning a Hubble time, in a few CPU days on standard personal computers and provides a wealth of data rivalled only by N-body simulations. The current version of the software requires the use of routines from the "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).

  2. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie M.; Mashnik, Stepan G.

    Accurate total reaction cross section models are important for achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  3. Investigation of energetic particle induced geodesic acoustic mode

    NASA Astrophysics Data System (ADS)

    Schneller, Mirjam; Fu, Guoyong; Chavdarovski, Ilija; Wang, Weixing; Lauber, Philipp; Lu, Zhixin

    2017-10-01

    Energetic particles are ubiquitous in present and future tokamaks due to heating systems and fusion reactions. Anisotropy in the distribution function of the energetic particle population is able to excite oscillations from the continuous spectrum of geodesic acoustic modes (GAMs), which cannot be driven by plasma pressure gradients due to their toroidally and nearly poloidally symmetric structures. These oscillations are known as energetic particle-induced geodesic acoustic modes (EGAMs) [G.Y. Fu '08] and have been observed in recent experiments [R. Nazikian '08]. EGAMs are particularly attractive in the framework of turbulence regulation, since they lead to an oscillatory radial electric shear which can potentially saturate the turbulence. For the presented work, the nonlinear gyrokinetic, electrostatic, particle-in-cell code GTS [W.X. Wang '06] has been extended to include an energetic particle population following either a bump-on-tail Maxwellian or a slowing-down [Stix '76] distribution function. With this new tool, we study the growth rate, frequency and mode structure of the EGAM in an ASDEX Upgrade-like scenario. A detailed understanding of EGAM excitation is essential for future studies of EGAM interaction with micro-turbulence. Funded by the Max Planck Princeton Research Center. Computational resources of MPCDF and NERSC are gratefully acknowledged.
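
    For reference, the slowing-down distribution referred to above (Stix 1976) is commonly written in the isotropic form below, with v_c the critical velocity and v_b the injection (birth) velocity, while the bump-on-tail option corresponds to a shifted Maxwellian; this is the standard textbook form, not necessarily the exact implementation in GTS:

        f_{\mathrm{sd}}(v) \;\propto\; \frac{\Theta(v_{b}-v)}{v^{3}+v_{c}^{3}}

    where Θ is the Heaviside step function.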

  4. Swelling of two-dimensional polymer rings by trapped particles.

    PubMed

    Haleva, E; Diamant, H

    2006-09-01

    The mean area of a two-dimensional Gaussian ring of N monomers is known to diverge when the ring is subject to a critical pressure differential, p_c ~ N^-1. In a recent publication (Eur. Phys. J. E 19, 461 (2006)) we have shown that for an inextensible freely jointed ring this divergence turns into a second-order transition from a crumpled state, where the mean area scales as ⟨A⟩ ~ N, to a smooth state with ⟨A⟩ ~ N^2. In the current work we extend these two models to the case where the swelling of the ring is caused by trapped ideal-gas particles. The Gaussian model is solved exactly, and the freely jointed one is treated using a Flory argument, mean-field theory, and Monte Carlo simulations. For a fixed number Q of trapped particles the criticality disappears in both models through an unusual mechanism, arising from the absence of an area constraint. In the Gaussian case the ring swells to such a mean area, ⟨A⟩ ~ NQ, that the pressure exerted by the particles is at p_c for any Q. In the freely jointed model the mean area is such that the particle pressure is always higher than p_c, and ⟨A⟩ consequently follows a single scaling law, ⟨A⟩ ~ N^2 f(Q/N), for any Q. By contrast, when the particles are in contact with a reservoir of fixed chemical potential, the criticality is retained. Thus, the two ensembles are manifestly inequivalent in these systems.
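
    The fixed-Q result for the Gaussian ring quoted above can be seen from a simple pressure-balance sketch using the two-dimensional ideal-gas law:

        p \;=\; \frac{Q\,k_{B}T}{\langle A\rangle},
        \qquad
        p \simeq p_{c} \propto N^{-1}
        \;\Longrightarrow\;
        \langle A\rangle \propto N\,Q .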

  5. Interaction between high harmonic fast waves and fast ions in NSTX/NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Bertelli, N.; Valeo, E. J.; Gorelenkova, M.; Green, D. L.; RF SciDAC Team

    2016-10-01

    Fast wave (FW) heating in the ion cyclotron range of frequencies (ICRF) has been successfully used to sustain and control fusion plasma performance, and it will likely play an important role in the ITER experiment. As demonstrated in the NSTX and DIII-D experiments, the interactions between fast waves and fast ions can be strong enough to significantly modify the fast ion population from neutral beam injection. In fact, it has recently been found in NSTX that FWs can modify and, under certain conditions, even suppress energetic particle driven instabilities, such as toroidal Alfvén eigenmodes, global Alfvén eigenmodes and fishbones. This paper examines such interactions in NSTX/NSTX-U plasmas by using the recent extension of the RF full-wave code TORIC to include non-Maxwellian ion distribution functions. Particular attention is given to the evolution of the fast ion distribution function with and without RF. Tests of the RF kick operator implemented in the Monte Carlo particle code NUBEAM are also discussed, as a step towards a self-consistent evaluation of the RF wave field and the ion distribution functions in the TRANSP code. Work supported by US DOE Contract DE-AC02-09CH11466.

  6. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    PubMed

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.

  7. The method of unitary clothing transformations in the theory of nucleon-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Dubovyk, I.; Shebeko, A.

    2010-04-01

    The clothing procedure, put forward in quantum field theory (QFT) by Greenberg and Schweber, is applied to the description of nucleon-nucleon (N-N) scattering. We consider pseudoscalar (π and η), vector (ρ and ω) and scalar (δ and σ) meson fields interacting with spin-1/2 nucleon fields via Yukawa-type couplings to introduce trial interactions between "bare" particles. The subsequent unitary clothing transformations (UCTs) are found to express the total Hamiltonian through new interaction operators that refer to particles with physical (observable) properties, the so-called clothed particles. In this work, we focus upon the Hermitian and energy-independent operators for the clothed nucleons, built up to second order in the coupling constants. The corresponding analytic expressions in momentum space are compared with the separate meson contributions to the one-boson-exchange potentials in the meson theory of nuclear forces. In order to evaluate the T matrix of N-N scattering we have used an equivalence theorem that enables us to operate in the clothed particle representation (CPR) instead of the bare particle representation (BPR) with its huge amount of virtual processes. We have derived the Lippmann-Schwinger (LS)-type equation for the CPR elements of the T-matrix for a given collision energy in the two-nucleon sector of the Hilbert space H of hadronic states and developed a code for its numerical solution in momentum space.
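
    For reference, the momentum-space Lippmann-Schwinger-type equation mentioned above has, in a schematic single-channel partial-wave notation with nucleon mass m, the standard form below; in the CPR the clothed two-nucleon interaction plays the role of the potential V:

        T(p',p;E) \;=\; V(p',p) \;+\; \int_{0}^{\infty} dq\, q^{2}\,\frac{V(p',q)\,T(q,p;E)}{E - q^{2}/m + i\varepsilon}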

  8. Proton irradiation on materials

    NASA Technical Reports Server (NTRS)

    Chang, C. Ken

    1993-01-01

    A computer code is developed, by utilizing a radiation transport code developed at NASA Langley Research Center, to study proton radiation effects on materials which have potential application in NASA's future space missions. The code covers proton energies from 0.01 MeV to 100 GeV and is sufficient for energetic protons encountered in both low Earth and geosynchronous orbits. With some modification, the code can be extended to particles heavier than protons as the radiation source. The code is capable of calculating the range, stopping power, exit energy, energy deposition coefficients, dose, and cumulative dose along the path of the proton in a target material. The target material can be any combination of the elements with atomic number ranging from 1 to 92, or any compound with known chemical composition. The generated cross section for a material is stored and reused later to save computer time. This information can be utilized to calculate the proton dose a material would receive in an orbit when the radiation environment is known. It can also be used to determine, in the laboratory, parameters such as proton beam current and irradiation time to attain the desired dosage for accelerated ground testing of any material. It is hoped that the present work will be extended to include polymeric and composite materials, which are prime candidates for use in coatings, electronic components, and structures. It is also desirable to determine, for ground testing these materials, the laboratory parameters needed to simulate the dose they would receive in space environments. A sample printout for water subject to 1.5 MeV protons is included as a reference.
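
    The range and exit-energy quantities listed above follow from the continuous-slowing-down approximation, R(E) = ∫ dE'/S(E'); the sketch below integrates a user-supplied stopping-power function numerically and inverts the relation to obtain the exit energy after a slab. The demonstration stopping power and all numbers are placeholders, not data from the Langley transport code.

      import numpy as np

      def csda_range(E, stopping_power, n=2000):
          """CSDA range R(E) = integral_0^E dE'/S(E') by trapezoidal quadrature.
          stopping_power(E) must return S in MeV/cm (or MeV cm^2/g for mass units)."""
          Es = np.linspace(1e-6, E, n)              # avoid the E' = 0 endpoint
          return np.trapz(1.0 / stopping_power(Es), Es)

      def exit_energy(E0, thickness, stopping_power):
          """Energy remaining after a slab of given thickness (same length unit as R):
          solve R(E_exit) = R(E0) - thickness by bisection; returns 0 if the proton stops."""
          residual_range = csda_range(E0, stopping_power) - thickness
          if residual_range <= 0.0:
              return 0.0
          lo, hi = 0.0, E0
          for _ in range(60):                       # bisection on the monotonic R(E)
              mid = 0.5 * (lo + hi)
              if csda_range(mid, stopping_power) < residual_range:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      # Placeholder stopping power with the qualitative ~1/E behaviour of protons in water.
      S_demo = lambda E: 250.0 / np.maximum(E, 1e-6) + 2.0   # MeV/cm (illustrative only)
      print(exit_energy(10.0, 0.1, S_demo))         # exit energy (MeV) after a 0.1 cm slab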

  9. Hybrid-view programming of nuclear fusion simulation code in the PGAS parallel programming language XcalableMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi

    Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.

  10. Hybrid-view programming of nuclear fusion simulation code in the PGAS parallel programming language XcalableMP

    DOE PAGES

    Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi; ...

    2016-06-01

    Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.

  11. Global simulation of edge pedestal micro-instabilities

    NASA Astrophysics Data System (ADS)

    Wan, Weigang; Parker, Scott; Chen, Yang

    2011-10-01

    We study micro turbulence of the tokamak edge pedestal with global gyrokinetic particle simulations. The simulation code GEM is an electromagnetic δf code. Two sets of DIII-D experimental profiles, shot #131997 and shot #136051 are used. The dominant instabilities appear to be two kinds of modes both propagating in the electron diamagnetic direction, with comparable linear growth rates. The low n mode is at the Alfven frequency range and driven by density and ion temperature gradients. The high n mode is driven by electron temperature gradient and has a low real frequency. A β scan shows that the low n mode is electromagnetic. Frequency analysis shows that the high n mode is sometimes mixed with an ion instability. Experimental radial electric field is applied and its effects studied. We will also show some preliminary nonlinear results. We thank R. Groebner, P. Snyder and Y. Zheng for providing experimental profiles and helpful discussions.

  12. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes

    NASA Astrophysics Data System (ADS)

    Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.

    2015-01-01

    Detailed analyses of Solar Particle Events (SPEs) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and on the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space), version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the transport of SPE spectra through a 10 or 20 g/cm² Al shield followed by a 30 g/cm² water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes showing closer agreement to each other compared with the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies (E < 100 MeV). Based on a mean-square-difference analysis, the results from MCNPX and PHITS agree better for fluence, dose and dose equivalent when compared to OLTARIS results.
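
    For reference, for the charged-particle components the dose and dose-equivalent values compared above are related to the transported fluence spectra via the mass stopping power S/ρ and a quality factor Q (formally a function of LET); neutron and photon doses are tallied from energy deposition or kerma instead. Schematically, per charged-particle type j:

        D = \sum_{j}\int \Phi_{j}(E)\left(\frac{S(E)}{\rho}\right)_{j} dE,
        \qquad
        H = \sum_{j}\int Q_{j}(E)\,\Phi_{j}(E)\left(\frac{S(E)}{\rho}\right)_{j} dE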

  13. Tempest Neoclassical Simulation of Fusion Edge Plasmas

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.; Xiong, Z.; Cohen, B. I.; Cohen, R. H.; Dorr, M.; Hittinger, J.; Kerbel, G. D.; Nevins, W. M.; Rognlien, T. D.

    2006-04-01

    We are developing a continuum gyrokinetic full-F code, TEMPEST, to simulate edge plasmas. The geometry is that of a fully diverted tokamak and so includes boundary conditions for both closed magnetic flux surfaces and open field lines. The code, presently 4-dimensional (2D2V), includes kinetic ions and electrons, a gyrokinetic Poisson solver for electric field, and the nonlinear Fokker-Planck collision operator. Here we present the simulation results of neoclassical transport with Boltzmann electrons. In a large aspect ratio circular geometry, excellent agreement is found for neoclassical equilibrium with parallel flows in the banana regime without a temperature gradient. In divertor geometry, it is found that the endloss of particles and energy induces pedestal-like density and temperature profiles inside the magnetic separatrix and parallel flow stronger than the neoclassical predictions in the SOL. The impact of the X-point divertor geometry on the self-consistent electric field and geo-acoustic oscillations will be reported. We will also discuss the status of extending TEMPEST into a 5-D code.

  14. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.

    2010-11-01

    The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests together with a discussion of the performance is presented.
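
    For context, the time-averaged (cycle-averaged) ponderomotive force used in such envelope models has the standard form below, written in terms of the normalized laser vector potential a = eA/(m_e c); in the weakly relativistic limit it reduces to the familiar gradient of the averaged intensity. This is the textbook expression, not necessarily the exact operator implemented in INF&RNO:

        \vec{F}_{p} \;=\; -\,m_{e}c^{2}\,\nabla\gamma,
        \qquad
        \gamma = \sqrt{1 + \langle a^{2}\rangle},
        \qquad
        \vec{F}_{p} \approx -\,m_{e}c^{2}\,\nabla\frac{\langle a^{2}\rangle}{2}\ \ (\langle a^{2}\rangle \ll 1)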

  15. Extension to Higher Mass Numbers of an Improved Knockout-Ablation-Coalescence Model for Secondary Neutron and Light Ion Production in Cosmic Ray Interactions

    NASA Astrophysics Data System (ADS)

    Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.

    Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model was extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, and the ability to predict light ion spectra added to the coalescence model. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutron and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron and light ion energy spectra for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions of the recently developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.

  16. Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body

    NASA Astrophysics Data System (ADS)

    Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer

    2016-12-01

    We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 106 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes `frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.

  17. Growth process of hydrogenated amorphous carbon films synthesized by atmospheric pressure plasma enhanced CVD using nitrogen and helium as a dilution gas

    NASA Astrophysics Data System (ADS)

    Mori, Takanori; Sakurai, Takachika; Sato, Taiki; Shirakura, Akira; Suzuki, Tetsuya

    2016-04-01

    Hydrogenated amorphous carbon films with various thicknesses were synthesized by dielectric barrier discharge-based plasma deposition under atmospheric pressure, diluted with nitrogen (N2) and helium (He), at various pulse frequencies. The C2H2/N2 films showed cauliflower-like particles that grew larger as the film thickness increased. At 5 kHz, a film with a thickness of 2.7 µm and a smooth surface was synthesized. On the other hand, the films synthesized from C2H2/He had smooth surfaces and were densely packed with domed particles. The domed particles expanded as the film thickness increased, enabling the film to grow to 37 µm while retaining a smooth surface.

  18. Addressing Fission Product Validation in MCNP Burnup Credit Criticality Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Bowen, Douglas G; Marshall, William BJ J

    2015-01-01

    The US Nuclear Regulatory Commission (NRC) Division of Spent Fuel Storage and Transportation issued Interim Staff Guidance (ISG) 8, Revision 3 in September 2012. This ISG provides guidance for NRC staff members' review of burnup credit (BUC) analyses supporting transport and dry storage of pressurized water reactor spent nuclear fuel (SNF) in casks. The ISG includes guidance for addressing validation of criticality (k_eff) calculations crediting the presence of a limited set of fission products and minor actinides (FP&MAs). Based on previous work documented in NRC Regulatory Guide (NUREG) Contractor Report (CR)-7109, the ISG recommends that NRC staff members accept the use of either 1.5 or 3% of the FP&MA worth, in addition to the bias and bias uncertainty resulting from validation of k_eff calculations for the major actinides in SNF, to conservatively account for the bias and bias uncertainty associated with the specified unvalidated FP&MAs. The ISG recommends (1) use of 1.5% of the FP&MA worth if a modern version of SCALE and its nuclear data are used and (2) 3% of the FP&MA worth for well-qualified, industry-standard code systems other than SCALE with the Evaluated Nuclear Data File, Part B (ENDF/B)-V, ENDF/B-VI, or ENDF/B-VII cross-section libraries. The work presented in this paper provides a basis for extending the use of the 1.5% FP&MA worth bias to BUC criticality calculations performed using the Monte Carlo N-Particle (MCNP) code. The extended use of the 1.5% FP&MA worth bias is shown to be acceptable by comparison of FP&MA worths calculated using SCALE and MCNP with ENDF/B-V, -VI, and -VII-based nuclear data. The comparison supports use of the 1.5% FP&MA worth bias when the MCNP code is used for criticality calculations, provided that the cask design is similar to the hypothetical generic BUC-32 cask model and that the credited FP&MA worth is no more than 0.1 Δk_eff (ISG-8, Rev. 3, Recommendation 4).
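
    As a worked illustration of applying this recommendation: if the credited FP&MA worth in a cask model is at its allowed maximum of 0.1 Δk_eff, the additional bias assigned under the 1.5% recommendation is

        \Delta k_{\text{bias}} = 0.015 \times 0.1 = 0.0015\ \text{(in units of }\Delta k_{\mathrm{eff}}\text{)},

    which is applied in addition to the bias and bias uncertainty obtained from major-actinide validation.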

  19. Fast Multipole Methods for Three-Dimensional N-body Problems

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.

    1995-01-01

    We are developing computational tools for the simulations of three-dimensional flows past bodies undergoing arbitrary motions. High resolution viscous vortex methods have been developed that allow for extended simulations of two-dimensional configurations such as vortex generators. Our objective is to extend this methodology to three dimensions and develop a robust computational scheme for the simulation of such flows. A fundamental issue in the use of vortex methods is the ability to employ efficiently large numbers of computational elements to resolve the large range of scales that exist in complex flows. The traditional cost of the method scales as O(N^2) as the N computational elements/particles induce velocities at each other, making the method unacceptable for simulations involving more than a few tens of thousands of particles. In the last decade fast methods have been developed that have operation counts of O(N log N) or O(N) (referred to as BH and GR respectively) depending on the details of the algorithm. These methods are based on the observation that the effect of a cluster of particles at a certain distance may be approximated by a finite series expansion. In order to exploit this observation we need to decompose the element population spatially into clusters of particles and build a hierarchy of clusters (a tree data structure) - smaller neighboring clusters combine to form a cluster of the next size up in the hierarchy and so on. This hierarchy of clusters allows one to determine efficiently when the approximation is valid. This algorithm is an N-body solver that appears in many fields of engineering and science. Some examples of its diverse use are in astrophysics, molecular dynamics, micro-magnetics, boundary element simulations of electromagnetic problems, and computer animation. More recently these N-body solvers have been implemented and applied in simulations involving vortex methods. Koumoutsakos and Leonard (1995) implemented the GR scheme in two dimensions for vector computer architectures allowing for simulations of bluff body flows using millions of particles. Winckelmans presented three-dimensional, viscous simulations of interacting vortex rings, using vortons and an implementation of a BH scheme for parallel computer architectures. Bhatt presented a vortex filament method to perform inviscid vortex ring interactions, with an alternative implementation of a BH scheme for a Connection Machine parallel computer architecture.
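
    The core idea referred to above, that a distant cluster of particles can be approximated by a truncated series expansion, can be illustrated with a minimal sketch; here only the leading monopole term (total mass at the centre of mass) is kept, and the cluster is a randomly generated stand-in rather than a vortex-element population.

        # Minimal sketch of the far-field approximation used by tree codes: compare the
        # exact potential of a particle cluster with its monopole approximation.
        import numpy as np

        rng = np.random.default_rng(0)
        cluster = rng.normal(loc=0.0, scale=0.1, size=(1000, 3))   # cluster of unit-mass particles
        masses = np.ones(len(cluster))

        def direct_potential(x, pos, m):
            """Exact O(N) potential at point x from all particles (G = 1)."""
            r = np.linalg.norm(pos - x, axis=1)
            return -np.sum(m / r)

        def monopole_potential(x, pos, m):
            """Far-field approximation: leading term of the multipole expansion."""
            com = np.average(pos, axis=0, weights=m)
            return -m.sum() / np.linalg.norm(com - x)

        for d in (0.5, 1.0, 5.0):
            x = np.array([d, 0.0, 0.0])
            exact = direct_potential(x, cluster, masses)
            approx = monopole_potential(x, cluster, masses)
            print(f"d={d:4.1f}  exact={exact:9.3f}  monopole={approx:9.3f}  "
                  f"rel.err={abs(approx - exact) / abs(exact):.2e}")

    The relative error shrinks rapidly with distance, which is exactly the property the tree hierarchy exploits to decide when a cluster may be treated as a single expansion.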

  20. Estimation of relative biological effectiveness for boron neutron capture therapy using the PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Horiguchi, Hironori; Sato, Tatsuhiko; Kumada, Hiroaki; Yamamoto, Tetsuya; Sakae, Takeji

    2015-03-01

    The absorbed doses deposited by boron neutron capture therapy (BNCT) can be categorized into four components: α and 7Li particles from the 10B(n,α)7Li reaction, 0.54-MeV protons from the 14N(n,p)14C reaction, the recoiled protons from the 1H(n,n)1H reaction, and photons from the neutron beam and the 1H(n,γ)2H reaction. For evaluating the irradiation effect in tumors and the surrounding normal tissues in BNCT, it is of great importance to estimate the relative biological effectiveness (RBE) for each dose component in the same framework. We have, therefore, established a new method for estimating the RBE of all BNCT dose components on the basis of the microdosimetric kinetic model. This method employs the probability density of lineal energy, y, in a subcellular structure as the index for expressing RBE, which can be calculated using the microdosimetric function implemented in the particle transport simulation code (PHITS). The accuracy of this method was tested by comparing the calculated RBE values with corresponding measured data in a water phantom irradiated with an epithermal neutron beam. The calculation technique developed in this study will be useful for biological dose estimation in treatment planning for BNCT. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
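
    A minimal numerical sketch of the central quantity in this kind of microdosimetric kinetic model calculation is given below: the saturation-corrected dose-mean lineal energy computed from a probability density f(y). The density used here is a made-up placeholder rather than PHITS output, and the saturation parameter is a typical literature value.

        # Sketch (assumption-laden): saturation-corrected dose-mean lineal energy y*
        # from a lineal-energy probability density f(y), as used in the MKM.
        import numpy as np

        def trap(g, x):
            """Plain trapezoid-rule integral of samples g over grid x."""
            return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

        y = np.logspace(-1, 3, 2000)                       # lineal energy grid, keV/um
        f = np.exp(-(np.log(y) - np.log(20.0))**2 / 0.5)   # placeholder density (unnormalised)
        f /= trap(f, y)

        y0 = 150.0   # saturation parameter, keV/um (typical literature value)
        y_star = y0**2 * trap((1.0 - np.exp(-(y / y0)**2)) * f, y) / trap(y * f, y)
        print(f"saturation-corrected dose-mean lineal energy y* = {y_star:.1f} keV/um")
        # In the MKM, y* feeds the linear coefficient of the dose response, from which
        # an RBE relative to a reference radiation can be formed.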

  1. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    DOE PAGES

    Gelmini, Graciela B.; Georgescu, Andreea; Gondolo, Paolo; ...

    2015-11-24

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = -0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = -0.8.
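
    The change of variable described above, from recoil energy to minimum dark matter speed, follows from standard two-body kinematics; a small sketch with arbitrary example numbers is given below.

        # Standard kinematics: minimum DM speed able to produce a given nuclear recoil.
        # The masses and recoil energy in the example are arbitrary illustrations.
        import math

        C = 299792.458   # speed of light, km/s

        def vmin_km_s(E_R_keV, m_dm_GeV, m_A_GeV, delta_keV=0.0):
            """Minimum DM speed (km/s) for recoil E_R on a nucleus of mass m_A.
            delta_keV > 0 corresponds to endothermic inelastic scattering."""
            E_R = E_R_keV * 1e-6                 # GeV
            delta = delta_keV * 1e-6             # GeV
            mu = m_dm_GeV * m_A_GeV / (m_dm_GeV + m_A_GeV)
            return abs(m_A_GeV * E_R / mu + delta) / math.sqrt(2.0 * m_A_GeV * E_R) * C

        # Example: a 3 keV recoil on silicon (~26.1 GeV) from a 9 GeV WIMP, elastic case
        print(f"v_min = {vmin_km_s(3.0, 9.0, 26.1):.0f} km/s")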

  2. Quantitative NDA measurements of advanced reprocessing product materials containing uranium, neptunium, plutonium, and americium

    NASA Astrophysics Data System (ADS)

    Goddard, Braden

    The ability of inspection agencies and facility operators to measure powders containing several actinides is increasingly necessary as new reprocessing techniques and fuel forms are being developed. These powders are difficult to measure with nondestructive assay (NDA) techniques because neutrons emitted from induced and spontaneous fission of different nuclides are very similar. A neutron multiplicity technique based on first-principle methods was developed to measure these powders by exploiting isotope-specific nuclear properties, such as the energy-dependent fission cross sections and the neutron-induced fission neutron multiplicity. This technique was tested through extensive simulations using the Monte Carlo N-Particle eXtended (MCNPX) code, one measurement campaign using the Active Well Coincidence Counter (AWCC), and two measurement campaigns using the Epithermal Neutron Multiplicity Counter (ENMC) with various (α,n) sources and actinide materials. Four potential applications of this first-principle technique have been identified: (1) quantitative measurement of uranium, neptunium, plutonium, and americium materials; (2) quantitative measurement of mixed oxide (MOX) materials; (3) quantitative measurement of uranium materials; and (4) weapons verification in arms control agreements. This technique still has several challenges to overcome, the largest being the need for high-precision active and passive measurements to produce results with acceptably small uncertainties.

  3. Evaluation of Am–Li neutron spectra data for active well type neutron multiplicity measurements of uranium

    DOE PAGES

    Goddard, Braden; Croft, Stephen; Lousteau, Angela; ...

    2016-05-25

    Safeguarding nuclear material is an important and challenging task for the international community. One particular safeguards technique commonly used for uranium assay is active neutron correlation counting. This technique involves irradiating unused uranium with (α,n) neutrons from an Am-Li source and recording the resultant neutron pulse signal, which includes induced fission neutrons. Although this non-destructive technique is widely employed in safeguards applications, the neutron energy spectrum from an Am-Li source is not well known. Several measurements over the past few decades have been made to characterize this spectrum; however, little work has been done comparing the measured spectra of various Am-Li sources to each other. This paper examines fourteen different Am-Li spectra, focusing on how these spectra affect simulated neutron multiplicity results using the code Monte Carlo N-Particle eXtended (MCNPX). Two measurement and simulation campaigns were completed using Active Well Coincidence Counter (AWCC) detectors and uranium standards of varying enrichment. The results of this work indicate that for standard AWCC measurements, the fourteen Am-Li spectra produce similar doubles and triples count rates. The singles count rates varied by as much as 20% between the different spectra, although they are usually not used in quantitative analysis.

  4. Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas

    NASA Astrophysics Data System (ADS)

    Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.

  5. Investigation of Ionospheric Disturbances

    DTIC Science & Technology

    1977-01-28

    Heikkila, D. M. Klumpar, J. D. Winningham, U. Fahleson, C. G. Falthammar, and A. Pederson; "Rocket-Borne Particle, Field and Plasma Observations in the..." (remainder of this record is OCR-garbled distribution-list text and is not recoverable).

  6. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. In order to become computationally more efficient with a view to practical use in radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.

  7. Bench-level characterization of a CMOS standard-cell D-latch using alpha-particle sensitive test circuits

    NASA Technical Reports Server (NTRS)

    Blaes, B. R.; Soli, G. A.; Buehler, M. G.

    1991-01-01

    A methodology is described for predicting the SEU susceptibility of a standard-cell D-latch using an alpha-particle-sensitive SRAM, SPICE critical charge simulation results, and alpha-particle interaction physics. Measurements were made on a 1.6-micron n-well CMOS 4-kb test SRAM irradiated with an Am-241 alpha-particle source. A collection depth of 6.09 microns was determined using these results and the TRIM computer code. Using this collection depth and SPICE-derived critical charge results for the latch design, an LET threshold of 34 MeV sq cm/mg was predicted. Heavy ion tests were then performed on the latch and an LET threshold of 41 MeV sq cm/mg was determined.
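
    The conversion implied by this methodology, from a critical charge and a collection depth to an LET threshold, can be sketched as below; the silicon constants are standard values, while the critical charge is a hypothetical placeholder rather than the value used in the paper.

        # Back-of-envelope sketch: LET threshold from a critical charge and a collection
        # depth, assuming standard silicon constants. Q_crit below is hypothetical.
        RHO_SI_MG_CM3 = 2330.0   # silicon density, mg/cm^3
        MEV_PER_PC = 22.5        # ~3.6 eV per e-h pair means ~22.5 MeV deposits 1 pC

        def let_threshold(q_crit_pC, depth_um):
            """LET threshold in MeV*cm^2/mg for charge q_crit collected over depth_um."""
            depth_cm = depth_um * 1e-4
            energy_MeV = q_crit_pC * MEV_PER_PC
            return energy_MeV / (RHO_SI_MG_CM3 * depth_cm)

        # prints ~34 for an assumed Q_crit of about 2.15 pC and the 6.09 micron depth
        print(f"LET_th ~ {let_threshold(2.15, 6.09):.0f} MeV*cm^2/mg")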

  8. Modeling emission lag after photoexcitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L.; Petillo, John J.; Ovtchinnikov, Serguei

    A theoretical model of delayed emission following photoexcitation from metals and semiconductors is given. Its numerical implementation is designed for beam optics codes used to model photocathodes in rf photoinjectors. The model extends the Moments approach for predicting photocurrent and mean transverse energy as moments of an emitted electron distribution by incorporating time of flight and scattering events that result in emission delay on a sub-picosecond level. The model accounts for a dynamic surface extraction field and changes in the energy distribution and time of emission as a consequence of the laser penetration depth and multiple scattering events during transport. Usage in the Particle-in-Cell code MICHELLE to predict the bunch shape and duration with or without laser jitter is given. The consequences of delayed emission effects for ultra-short pulses are discussed.

  9. Modeling emission lag after photoexcitation

    DOE PAGES

    Jensen, Kevin L.; Petillo, John J.; Ovtchinnikov, Serguei; ...

    2017-10-28

    A theoretical model of delayed emission following photoexcitation from metals and semiconductors is given. Its numerical implementation is designed for beam optics codes used to model photocathodes in rf photoinjectors. The model extends the Moments approach for predicting photocurrent and mean transverse energy as moments of an emitted electron distribution by incorporating time of flight and scattering events that result in emission delay on a sub-picosecond level. The model accounts for a dynamic surface extraction field and changes in the energy distribution and time of emission as a consequence of the laser penetration depth and multiple scattering events during transport. Usage in the Particle-in-Cell code MICHELLE to predict the bunch shape and duration with or without laser jitter is given. The consequences of delayed emission effects for ultra-short pulses are discussed.

  10. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  11. On the Evolution of the Standard Genetic Code: Vestiges of Critical Scale Invariance from the RNA World in Current Prokaryote Genomes

    PubMed Central

    José, Marco V.; Govezensky, Tzipe; García, José A.; Bobadilla, Juan R.

    2009-01-01

    Herein, two genetic codes through which the primeval RNA code could have given rise to the standard genetic code (SGC) are derived. One of them, called extended RNA code type I, consists of all codons of the type RNY (purine-any base-pyrimidine) plus codons obtained by considering the RNA code but in the second (NYR type) and third (YRN type) reading frames. The extended RNA code type II comprises all codons of the type RNY plus codons that arise from transversions of the RNA code in the first (YNY type) and third (RNR) nucleotide bases. In order to test if putative nucleotide sequences in the RNA World and in both extended RNA codes share the same scaling and statistical properties as those encountered in current prokaryotes, we used the genomes of four Eubacteria and three Archaeas. For each prokaryote, we obtained their respective genomes obeying the RNA code or the extended RNA codes types I and II. In each case, we estimated the scaling properties of triplet sequences via a renormalization group approach, and we calculated the frequency distributions of distances for each codon. Remarkably, the scaling properties of the distance series of some codons from the RNA code and most codons from both extended RNA codes turned out to be identical or very close to the scaling properties of codons of the SGC. To test for the robustness of these results, we show, via computer simulation experiments, that random mutations of current genomes, at a rate of 10^-10 per site per year over three billion years, were not enough to destroy the observed patterns. Therefore, we conclude that most current prokaryotes may still contain relics of the primeval RNA World and that both extended RNA codes may well represent two plausible evolutionary paths between the RNA code and the current SGC. PMID:19183813
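
    One ingredient of the analysis above, the frequency distribution of distances between successive occurrences of a codon when a sequence is read in non-overlapping triplets, is easy to sketch; the random sequence below is a placeholder, not a prokaryote genome.

        # Sketch: distance distribution for one codon in a triplet-read sequence.
        # The "genome" here is random placeholder data.
        from collections import Counter
        import random

        random.seed(0)
        genome = "".join(random.choice("ACGT") for _ in range(300_000))
        codons = [genome[i:i + 3] for i in range(0, len(genome) - 2, 3)]

        def distance_distribution(codon_list, codon):
            """Counts of gaps (in codon units) between successive occurrences of `codon`."""
            positions = [i for i, c in enumerate(codon_list) if c == codon]
            gaps = [b - a for a, b in zip(positions, positions[1:])]
            return Counter(gaps)

        dist = distance_distribution(codons, "GCA")
        print(sorted(dist.items())[:10])   # first few (gap, count) pairs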

  12. Improvement of Mishchenko's T-matrix code for absorbing particles.

    PubMed

    Moroz, Alexander

    2005-06-10

    The use of Gaussian elimination with backsubstitution for matrix inversion in scattering theories is discussed. Within the framework of the T-matrix method (the state-of-the-art code by Mishchenko is freely available at http://www.giss.nasa.gov/~crmim), it is shown that the domain of applicability of Mishchenko's FORTRAN 77 (F77) code can be substantially expanded in the direction of strongly absorbing particles where the current code fails to converge. Such an extension is especially important if the code is to be used in nanoplasmonic or nanophotonic applications involving metallic particles. At the same time, convergence can also be achieved for large nonabsorbing particles, in which case the non-Numerical Algorithms Group option of Mishchenko's code diverges. Computer F77 implementation of Mishchenko's code supplemented with Gaussian elimination with backsubstitution is freely available at http://www.wave-scattering.com.
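
    The numerical point behind this improvement can be illustrated generically: for an ill-conditioned matrix, solving linear systems by LU factorization (Gaussian elimination with backsubstitution) is typically more robust than forming an explicit inverse. The matrices below are random stand-ins, not actual T-matrix blocks.

        # Generic illustration: explicit inversion vs. LU solve for T such that T Q = -RgQ.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        # Build a deliberately ill-conditioned complex matrix Q (condition number ~1e9)
        U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
        Q = U @ np.diag(np.logspace(0, -9, n)) @ V
        RgQ = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

        T_inv = -RgQ @ np.linalg.inv(Q)           # explicit inversion
        T_lu = np.linalg.solve(Q.T, -RgQ.T).T     # Gaussian elimination + backsubstitution

        for name, T in (("explicit inverse", T_inv), ("LU solve", T_lu)):
            res = np.linalg.norm(T @ Q + RgQ) / np.linalg.norm(RgQ)
            print(f"{name:16s} relative residual: {res:.2e}")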

  13. An update on the BQCD Hybrid Monte Carlo program

    NASA Astrophysics Data System (ADS)

    Haar, Taylor Ryan; Nakamura, Yoshifumi; Stüben, Hinnerk

    2018-03-01

    We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using the Feynman-Hellmann theorem, more trace measurements (like Tr(D^-n) for K, c_SW and chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance critical parts employing SIMD.

  14. The Arrow of Time in the Collapse of Collisionless Self-gravitating Systems: Non-validity of the Vlasov-Poisson Equation during Violent Relaxation

    NASA Astrophysics Data System (ADS)

    Beraldo e Silva, Leandro; de Siqueira Pedra, Walter; Sodré, Laerte; Perico, Eder L. D.; Lima, Marcos

    2017-09-01

    The collapse of a collisionless self-gravitating system, with the fast achievement of a quasi-stationary state, is driven by violent relaxation, with a typical particle interacting with the time-changing collective potential. It is traditionally assumed that this evolution is governed by the Vlasov-Poisson equation, in which case entropy must be conserved. We run N-body simulations of isolated self-gravitating systems, using three simulation codes, NBODY-6 (direct summation without softening), NBODY-2 (direct summation with softening), and GADGET-2 (tree code with softening), for different numbers of particles and initial conditions. At each snapshot, we estimate the Shannon entropy of the distribution function with three different techniques: Kernel, Nearest Neighbor, and EnBiD. For all simulation codes and estimators, the entropy evolution converges to the same limit as N increases. During violent relaxation, the entropy has a fast increase followed by damping oscillations, indicating that violent relaxation must be described by a kinetic equation other than the Vlasov-Poisson equation, even for N as large as that of astronomical structures. This indicates that violent relaxation cannot be described by a time-reversible equation, shedding some light on the so-called “fundamental paradox of stellar dynamics.” The long-term evolution is well-described by the orbit-averaged Fokker-Planck model, with Coulomb logarithm values in the expected range 10-12. By means of NBODY-2, we also study the dependence of the two-body relaxation timescale on the softening length. The approach presented in the current work can potentially provide a general method for testing any kinetic equation intended to describe the macroscopic evolution of N-body systems.
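
    A minimal sketch of a nearest-neighbour entropy estimate of the kind mentioned above (the Kozachenko-Leonenko estimator) is given below; the 6-D phase-space sample is a Gaussian toy model rather than a simulation snapshot.

        # Kozachenko-Leonenko nearest-neighbour estimate of differential entropy (nats),
        # applied to a toy 6-D Gaussian sample instead of a real distribution function.
        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.special import digamma, gammaln

        def knn_entropy(samples):
            """Nearest-neighbour (k=1) estimator of the differential entropy."""
            N, d = samples.shape
            tree = cKDTree(samples)
            # k=2 because the closest hit of each query point is the point itself
            r, _ = tree.query(samples, k=2)
            eps = r[:, 1]
            log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of unit d-ball
            return digamma(N) - digamma(1) + log_cd + d * np.mean(np.log(eps))

        rng = np.random.default_rng(0)
        x = rng.standard_normal((20000, 6))            # toy 6-D phase-space sample
        exact = 0.5 * 6 * np.log(2 * np.pi * np.e)     # entropy of a unit 6-D Gaussian
        print(f"kNN estimate: {knn_entropy(x):.3f}   exact: {exact:.3f}")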

  15. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes.

    PubMed

    Aghara, S K; Sriprisan, S I; Singleterry, R C; Sato, T

    2015-01-01

    Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space), version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the transport of SPE spectra through a 10 or 20 g/cm^2 Al shield followed by a 30 g/cm^2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes showing closer agreement compared to the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies (E<100 MeV). Based on a mean square difference analysis, the results from MCNPX and PHITS agree better for fluence, dose and dose equivalent when compared to the OLTARIS results. Copyright © 2015 The Committee on Space Research (COSPAR). All rights reserved.

  16. DarkBit: a GAMBIT module for computing dark matter observables and likelihoods

    NASA Astrophysics Data System (ADS)

    Bringmann, Torsten; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Kahlhoefer, Felix; Kvellestad, Anders; Putze, Antje; Savage, Christopher; Scott, Pat; Weniger, Christoph; White, Martin; Wild, Sebastian

    2017-12-01

    We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade-decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments (gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments (DDCalc), and that provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool (GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a stand-alone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes (DarkSUSY and micrOMEGAs), and application of DarkBit's advanced direct and indirect detection routines to a simple effective dark matter model.

  17. NSR&D FY17 Report: CartaBlanca Capability Enhancements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Christopher Curtis; Dhakal, Tilak Raj; Zhang, Duan Zhong

    Over the last several years, particle technology in the CartaBlanca code has been matured and successfully applied to a wide variety of physical problems. It has been shown that the particle methods, especially Los Alamos's dual domain material point method, are capable of computing many problems involving complex physics and chemistry accompanied by large material deformations, where traditional finite element or Eulerian methods encounter significant difficulties. In FY17, the CartaBlanca code was enhanced with new physical models and numerical algorithms. We started out to compute penetration and HE safety problems. Most of the year we focused on improving the TEPLA model and testing it against the sweeping-wave experiment by Gray et al., because pore growth and material failure are essentially important for our tasks and needed to be understood for modeling the penetration and can experiments efficiently. We extended the TEPLA model from the point of view of ensemble phase averaging to include the effects of finite deformation. It is shown that the pore growth model assumed in TEPLA is actually an exact result from the theory. Along this line, we then generalized the model to include finite deformations to consider the nonlinear dynamics of large deformation. The interaction between the HE product gas and the solid metal is based on the multi-velocity formulation. Our preliminary numerical results suggest good agreement between the experiment and the numerical results, pending further verification. To improve the parallel processing capabilities of the CartaBlanca code, we are actively working with the Next Generation Code (NGC) project to rewrite selected packages using C++. This work is expected to continue in the following years. This effort also makes the particle technology developed within the CartaBlanca project available to other parts of the laboratory. Working with the NGC project and rewriting some parts of the code has also given us an opportunity to improve our numerical implementation of the method and to take advantage of recent advances in numerical methods, such as multiscale algorithms.

  18. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  19. Code C# for chaos analysis of relativistic many-body systems with reactions

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.

    2012-04-01

    In this work we present a reaction module for “Chaos Many-Body Engine” (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), can be supplied as a parameter, using a specific XML input file. Inspired by the Poincaré section, we also propose the “Clusterization Map” as a new intuitive analysis method of many-body systems. For exemplification, we implemented a numerical toy-model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions. Catalogue identifier: AEGH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 184 628 No. of bytes in distributed program, including test data, etc.: 7 905 425 Distribution format: tar.gz Programming language: Visual C#.NET 2005 Computer: PC Operating system: Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread. One processor used for each many-body system. RAM: 128 Megabytes Classification: 6.2, 6.5 Catalogue identifier of previous version: AEGH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464 External routines: Net Framework 2.0 Library Does the new version supersede the previous version?: Yes Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions. Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions. Object oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Treatment of two-particle reactions and decays. For each particle, calculation of the time measured in the particle reference frame, according to the instantaneous velocity. Possibility to dynamically add particle properties (spin, isospin, etc.), and reactions/decays, using a specific XML input file. Basic support for Monte Carlo simulations. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, “clusterization map”, and energy conservation precision test. As an example of use, we implemented a toy-model for nuclear relativistic collisions at 4.5 A GeV/c. Reasons for new version: Following our goal of applying chaos theory to nuclear relativistic collisions at 4.5 A GeV/c, we developed a reaction module integrated with the Chaos Many-Body Engine. In the previous version, inheriting the Particle class was the only possibility of implementing more particle properties (spin, isospin, and so on). In the new version, particle properties can be dynamically added using a dictionary object. The application was improved in order to calculate the time measured in the reference frame of each particle. Supported processes include two-particle reactions (a+b→c+d), decays (a→c+d), stimulated decays, and more complicated schemes implemented as various combinations of the previous reactions.
Following our goal of creating a flexible application, the reactions list, including the corresponding properties (cross sections, particle lifetimes, etc.), can be supplied as a parameter, using a specific XML configuration file. The simulation output files were modified for systems with reactions, while also ensuring backward compatibility. We propose the “Clusterization Map” as a new investigation method of many-body systems. The multi-dimensional Lyapunov Exponent was adapted in order to be used for systems with variable structure. Basic support for Monte Carlo simulations was also added. Additional comments: Windows forms application for testing the engine. Easy copy/paste based deployment method. Running time: Quadratic complexity.

  20. Extended Thomas-Fermi density functional for the unitary Fermi gas

    NASA Astrophysics Data System (ADS)

    Salasnich, Luca; Toigo, Flavio

    2008-11-01

    We determine the energy density ξ(3/5)nε_F and the gradient correction λℏ^2(∇n)^2/(8mn) of the extended Thomas-Fermi (ETF) density functional, where n is the number density and ε_F is the Fermi energy, for a trapped two-component Fermi gas with infinite scattering length (unitary Fermi gas) on the basis of recent diffusion Monte Carlo (DMC) calculations [Phys. Rev. Lett. 99, 233201 (2007)]. In particular we find that ξ=0.455 and λ=0.13 give the best fit of the DMC data with an even number N of particles. We also study the odd-even splitting γN^(1/9)ℏω of the ground-state energy for the unitary gas in a harmonic trap of frequency ω, determining the constant γ. Finally we investigate the effect of the gradient term in the time-dependent ETF model by introducing generalized Galilei-invariant hydrodynamics equations.
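
    For reference, the ETF energy functional implied by the quantities above can be reconstructed in the standard form (with an external trap potential U(r); this expression is a reconstruction under that assumption, not a quotation from the paper):

        E[n] = \int d^3r \left[ \xi\,\frac{3}{5}\, n\, \epsilon_F(n)
               + \lambda\, \frac{\hbar^2 (\nabla n)^2}{8 m n} + U(\mathbf{r})\, n \right],
        \qquad
        \epsilon_F(n) = \frac{\hbar^2}{2m}\left(3\pi^2 n\right)^{2/3},

    with ξ ≈ 0.455 and λ ≈ 0.13 fixed by the DMC fit quoted above.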

  1. Spatial separation and entanglement of identical particles

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; di Martino, Sara; Facchi, Paolo; Florio, Giuseppe

    2014-04-01

    We reconsider the effect of indistinguishability on the reduced density operator of the internal degrees of freedom (tracing out the spatial degrees of freedom) for a quantum system composed of identical particles located in different spatial regions. We explicitly show that if the spin measurements are performed in disjoint spatial regions then there are no constraints on the structure of the reduced state of the system. This implies that the statistics of identical particles has no role from the point of view of separability and entanglement when the measurements are spatially separated. We extend the treatment to the case of n particles and show the connection with some recent criteria for separability based on subalgebras of observables.

  2. Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.

    2005-09-01

    We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET, and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed ``entropy conserving'' formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative (``adiabatic'') hydrodynamic gaseous component. We perform multiple simulations using both codes with varying spatial and mass resolution with identical initial conditions. The dark matter-only runs agree generally quite well provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement, otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree-algorithm of GADGET at early times before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gasdynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. However, there are also some significant differences in the same quantities for gas of lower overdensity. For example, at z=3 the fraction of cosmic gas that has temperature logT>0.5 is ~80% for both Enzo ZEUS and GADGET, while it is 40%-60% for Enzo PPM. We argue that these discrepancies are due to differences in the shock-capturing abilities of the different methods. In particular, we find that the ZEUS implementation of artificial viscosity in Enzo leads to some unphysical heating at early times in preshock regions. While this is apparently a significantly weaker effect in GADGET, its use of an artificial viscosity technique may also make it prone to some excess generation of entropy that should be absent in Enzo PPM. Overall, the hydrodynamical results for GADGET are bracketed by those for Enzo ZEUS and Enzo PPM but are closer to Enzo ZEUS.

  3. SU(N) fermions in a one-dimensional harmonic trap

    NASA Astrophysics Data System (ADS)

    Laird, E. K.; Shi, Z.-Y.; Parish, M. M.; Levinsen, J.

    2017-09-01

    We conduct a theoretical study of SU(N) fermions confined by a one-dimensional harmonic potential. First, we introduce a numerical approach for solving the trapped interacting few-body problem, by which one may obtain accurate energy spectra across the full range of interaction strengths. In the strong-coupling limit, we map the SU(N) Hamiltonian to a spin-chain model. We then show that an existing, extremely accurate ansatz—derived for a Heisenberg SU(2) spin chain—is extendable to these N-component systems. Lastly, we consider balanced SU(N) Fermi gases that have an equal number of particles in each spin state for N = 2, 3, 4. In the weak- and strong-coupling regimes, we find that the ground-state energies rapidly converge to their expected values in the thermodynamic limit with increasing atom number. This suggests that the many-body energetics of N-component fermions may be accurately inferred from the corresponding few-body systems of N distinguishable particles.

  4. Novel exon 1 protein-coding regions N-terminally extend human KCNE3 and KCNE4.

    PubMed

    Abbott, Geoffrey W

    2016-08-01

    The 5 human (h)KCNE β subunits each regulate various cation channels and are linked to inherited cardiac arrhythmias. Reported here are previously undiscovered protein-coding regions in exon 1 of hKCNE3 and hKCNE4 that extend their encoded extracellular domains by 44 and 51 residues, which yields full-length proteins of 147 and 221 residues, respectively. Full-length hKCNE3 and hKCNE4 transcript and protein are expressed in multiple human tissues; for hKCNE4, only the longer protein isoform is detectable. Two-electrode voltage-clamp electrophysiology revealed that, when coexpressed in Xenopus laevis oocytes with various potassium channels, the newly discovered segment preserved conversion of KCNQ1 by hKCNE3 to a constitutively open channel, but prevented its inhibition of Kv4.2 and KCNQ4. hKCNE4 slowing of Kv4.2 inactivation and positive-shifted steady-state inactivation were also preserved in the longer form. In contrast, full-length hKCNE4 inhibition of KCNQ1 was limited to 40% at +40 mV vs. 80% inhibition by the shorter form, and augmentation of KCNQ4 activity by hKCNE4 was entirely abolished by the additional segment. Among the genome databases analyzed, the longer KCNE3 is confined to primates; full-length KCNE4 is widespread in vertebrates but is notably absent from Mus musculus. Findings highlight unexpected KCNE gene diversity, raise the possibility of dynamic regulation of KCNE partner modulation via splice variation, and suggest that the longer hKCNE3 and hKCNE4 proteins should be adopted in future mechanistic and genetic screening studies.-Abbott, G. W. Novel exon 1 protein-coding regions N-terminally extend human KCNE3 and KCNE4. © FASEB.

  5. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-11-04

    The numerical modeling code INF&RNO (INtegrated Fluid and paRticle simulatioN cOde, pronounced 'inferno') is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests together with a discussion of the performance is presented.

  6. Estimate of S-values for children due to six positron emitting radionuclides used in PET examinations

    NASA Astrophysics Data System (ADS)

    Belinato, Walmir; Santos, William S.; Perini, Ana P.; Neves, Lucio P.; Caldas, Linda V. E.; Souza, Divanizia N.

    2017-11-01

    Positron emission tomography (PET) has revolutionized the diagnosis of cancer since its conception. When combined with computed tomography (CT), PET/CT performed in children produces highly accurate diagnoses from images of regions affected by malignant tumors. Considering the high risk to children when exposed to ionizing radiation, a dosimetric study for PET/CT procedures is necessary. Specific absorbed fractions (SAF) were determined for monoenergetic photons and positrons, as well as the S-values for six positron emitting radionuclides (11C, 13N, 18F, 68Ga, 82Rb, 15O) and 22 source organs. The study was performed for six pediatric anthropomorphic hybrid models, including the newborn and 1-year-old hermaphrodite, and the 5- and 10-year-old male and female, using the Monte Carlo N-Particle eXtended code (MCNPX, version 2.7.0). The SAF results in source organs and the S-values for all organs were found to be inversely related to the age of the phantoms, which includes the variation of body weight. The results also showed that radionuclides with higher peak emission energies produce larger auto-absorbed S-values due to local dose deposition by positron decay. The S-values for the source organs are considerably larger due to the interaction of tissue with non-penetrating particles (electrons and positrons) and present a linear relationship with the phantom body masses. The S-values determined for positron-emitting radionuclides can be used to assess the radiation dose delivered to pediatric patients subjected to PET examination in clinical settings. The novelty of this work is associated with the determination of auto-absorbed S-values, in six new pediatric virtual anthropomorphic phantoms, for six positron emitters commonly employed in PET exams.
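
    Schematically, an S-value follows from the specific absorbed fractions in the MIRD formalism as S = Σ_i E_i Y_i Φ_i, summed over the emissions of the radionuclide; the sketch below uses simplified 18F emission data and placeholder SAF values, not results from this study.

        # Schematic MIRD-style S-value from specific absorbed fractions (SAF).
        # SAF values (and the simplified emission list) are placeholders.
        MEV_TO_J = 1.602176634e-13

        # (mean energy in MeV, yield per decay, SAF in 1/kg for some target<-source pair)
        emissions_18F = [
            (0.2498, 0.9686, 0.012),   # mean beta+ energy (placeholder SAF)
            (0.5110, 1.9372, 0.004),   # annihilation photons (placeholder SAF)
        ]

        s_value = sum(E * Y * saf for E, Y, saf in emissions_18F) * MEV_TO_J
        print(f"S-value ~ {s_value:.3e} Gy per decay")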

  7. Megaquakes, prograde surface waves and urban evolution

    NASA Astrophysics Data System (ADS)

    Lomnitz, C.; Castaños, H.

    2013-05-01

    Cities grow according to evolutionary principles. They move away from soft-ground conditions and avoid vulnerable types of structures. A megaquake generates prograde surface waves that produce unexpected damage in modern buildings. The examples (Figs. 1 and 2) were taken from the 1985 Mexico City and the 2010 Concepción, Chile megaquakes. About 400 structures built under supervision according to modern building codes were destroyed in the Mexican earthquake. All were sited on soft ground. A Rayleigh wave will cause surface particles to move as ellipses in a vertical plane. Building codes assume that this motion will be retrograde as on a homogeneous elastic halfspace, but soft soils are intermediate materials between a solid and a liquid. When Poisson's ratio tends to ν→0.5 the particle motion turns prograde as it would on a homogeneous fluid halfspace. Building codes assume that the tilt of the ground is not in phase with the acceleration, but we show that structures on soft ground tilt into the direction of the horizontal ground acceleration. The combined effect of gravity and acceleration may destabilize a structure when it is in resonance with its eigenfrequency. Castaños, H. and C. Lomnitz, 2013. Charles Darwin and the 1835 Chile earthquake. Seismol. Res. Lett., 84, 19-23. Lomnitz, C., 1990. Mexico 1985: the case for gravity waves. Geophys. J. Int., 102, 569-572. Malischewsky, P.G. et al., 2008. The domain of existence of prograde Rayleigh-wave particle motion. Wave Motion 45, 556-564. Figure 1: 1985 Mexico megaquake, overturned 15-story apartment building in Mexico City. Figure 2: 2010 Chile megaquake, overturned 15-story R-C apartment building in Concepción.

  8. Reconstruction of recycling flux from synthetic camera images, evaluated for the Wendelstein 7-X startup limiter

    NASA Astrophysics Data System (ADS)

    Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team

    2017-12-01

    The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, and the reconstruction of recycling fluxes from synthetic camera observations is evaluated.

  9. Experiment-theory comparison for low frequency BAE modes in the strongly shaped H-1NF stellarator

    DOE PAGES

    Haskey, S. R.; Blackwell, B. D.; Nuhrenberg, C.; ...

    2015-08-12

    Here, recent advances in the modeling, analysis, and measurement of fluctuations have significantly improved the diagnosis and understanding of Alfvén eigenmodes in the strongly shaped H-1NF helical axis stellarator. Experimental measurements, including 3D tomographic inversions of high resolution visible light images, are in close agreement with beta-induced Alfvén eigenmodes (BAEs) calculated using the compressible ideal MHD code, CAS3D. This is despite the low β in H-1NF, providing experimental evidence that these modes can exist due to compression that is induced by the strong shaping in stellarators, in addition to high β, as is the case in tokamaks. This is confirmed using the CONTI and CAS3D codes, which show significant gap structures at lower frequencies which contain BAE and beta-acoustic Alfvén eigenmodes (BAAEs). The BAEs are excited in the absence of a well confined energetic particle source, further confirming previous studies that thermal particles, electrons, or even radiation fluctuations can drive these modes. Datamining of magnetic probe data shows the experimentally measured frequency of these modes has a clear dependence on the rotational transform profile, which is consistent with a frequency dependency due to postulated confinement related temperature variations.

  10. Multi-Scale Simulation of Interfacial Phenomena and Nano-Particle Placement in Polymer Matrix Composites

    DTIC Science & Technology

    2012-08-01

    Fragmentary text recovered from presentation slides: molecular dynamics simulations; coarse-grain particle dynamics simulations; local structure; force field parameterization; extended structure; alkanes C8H18, C12H26, C16H34; adhesive forces can cause local density gradients and defects; pronounced layering of polymer near interfaces; reactive end groups (CnH2n+1S) on Cu; SubPc on C60; pentacene on a-SiO2; cyclopentene on Au; crystalline CuPc on Al; polyimide on Si.

  11. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. The Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
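
    The unfolding idea, recovering a non-negative spectrum that reproduces the measured pulse-height distribution through the response matrix, can be sketched with a bare-bones particle swarm optimizer; the response matrix, the true spectrum, and all PSO settings below are synthetic assumptions and do not reproduce the SDPSO algorithm itself.

        # Toy unfolding: find phi >= 0 minimising ||R @ phi - M||^2 with a basic PSO.
        import numpy as np

        rng = np.random.default_rng(3)
        n_bins, n_channels = 8, 12
        R = rng.uniform(0.0, 1.0, size=(n_channels, n_bins))   # synthetic response matrix
        phi_true = rng.uniform(0.0, 1.0, size=n_bins)
        M = R @ phi_true                                        # "measured" pulse-height data

        def cost(phi):
            return np.sum((R @ np.clip(phi, 0.0, None) - M) ** 2)

        n_particles, n_iter, w, c1, c2 = 60, 400, 0.72, 1.5, 1.5
        x = rng.uniform(0.0, 1.0, size=(n_particles, n_bins))   # particle positions
        v = np.zeros_like(x)
        pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[np.argmin(pbest_cost)].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            costs = np.array([cost(p) for p in x])
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
            gbest = pbest[np.argmin(pbest_cost)].copy()

        err = np.linalg.norm(np.clip(gbest, 0, None) - phi_true) / np.linalg.norm(phi_true)
        print(f"relative unfolding error: {err:.3f}")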

  12. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N^2) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10^7) to 300 ms (N = 10^9). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
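
    The "simple, sequential and unoptimized O(N^2)" calculation that such a framework parallelizes and tree-accelerates looks, in essence, like the direct-summation gravity kernel below; it is written in plain Python for illustration only, whereas actual FDPS user code is C++ and uses the framework's template interface, which is not reproduced here.

        # Direct O(N^2) softened gravity, the kind of naive kernel a particle framework
        # is meant to accelerate and parallelise. Softening and units are arbitrary.
        import numpy as np

        def direct_gravity(pos, mass, eps=1e-3, G=1.0):
            """Softened gravitational accelerations by direct O(N^2) summation."""
            N = len(pos)
            acc = np.zeros_like(pos)
            for i in range(N):
                dr = pos - pos[i]                           # vectors to all other particles
                r2 = np.einsum("ij,ij->i", dr, dr) + eps**2
                inv_r3 = r2 ** -1.5
                inv_r3[i] = 0.0                             # exclude self-interaction
                acc[i] = G * np.sum((mass * inv_r3)[:, None] * dr, axis=0)
            return acc

        rng = np.random.default_rng(0)
        pos = rng.standard_normal((1000, 3))
        mass = np.full(1000, 1.0 / 1000)
        a = direct_gravity(pos, mass)
        print("rms acceleration:", np.sqrt(np.mean(np.sum(a**2, axis=1))))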

  13. 78 FR 21006 - Consolidated Rail Corporation, CSX Transportation, Inc., and Norfolk Southern Railway Company...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-08

    ... part 1152 subpart F--Exempt Abandonments and Discontinuances of Service for each carrier to discontinue service over an approximately 2.23-mile line of railroad extending from milepost 0.77 to milepost 3.00 in Middlesex County, N.J. The line traverses United States Postal Service Zip Codes 08901, 08903, and 08906...

  14. 78 FR 21006 - Consolidated Rail Corporation, CSX Transportation, Inc., and Norfolk Southern Railway Company...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-08

    ... part 1152 subpart F--Exempt Abandonments and Discontinuances of Service for each carrier to discontinue service over an approximately 5.10-mile line of railroad extending from milepost 19.30 to milepost 24.40 , in Monmouth County, N.J. The line traverses United States Postal Service Zip Codes 07727 and 07728...

  15. Structure of 132Te (Z = 52, N = 80): The two-particle and two-hole spectrum of 132Sn (Z = 50, N = 82)

    NASA Astrophysics Data System (ADS)

    Biswas, S.; Palit, R.; Navin, A.; Rejmund, M.; Bisoi, A.; Sarkar, M. Saha; Sarkar, S.; Bhattacharyya, S.; Biswas, D. C.; Caamaño, M.; Carpenter, M. P.; Choudhury, D.; Clément, E.; Danu, L. S.; Delaune, O.; Farget, F.; de France, G.; Hota, S. S.; Jacquot, B.; Lemasson, A.; Mukhopadhyay, S.; Nanal, V.; Pillay, R. G.; Saha, S.; Sethi, J.; Singh, Purnima; Srivastava, P. C.; Tandel, S. K.

    2016-03-01

    High-spin states in 132Te, an isotope with two proton particles and two neutron holes outside of the 132Sn doubly magic core, have been extended up to an excitation energy of 6.17 MeV. The prompt-delayed coincidence technique has been used to correlate states above the T1/2 = 3.70(9) μs isomer in 132Te to the lower states using 232Th(7Li, f) at 5.4 MeV/u and the Indian National Gamma Array (INGA). With 9Be(238U, f) at 6.2 MeV/u and the EXOGAM γ-array coupled with the VAMOS++ spectrometer, the level scheme was extended to higher excitation energies. The high-spin positive-parity states, above J^π = 10^+, in 132Te are expected to arise from the alignment of the particles in the high-j orbitals lying close to the Fermi surface, the π(g7/2)^2 and the ν(h11/2)^-2 configurations. The experimental level scheme has been compared with large scale shell model calculations. A reduction in the p-n interaction strength resulted in an improved agreement with the measurements up to a spin of 15ℏ. In contrast, the comparison of the differences between the experiment and these calculations for the N = 76, 78 isotones of Te and Sn shows increasing disagreement as a function of spin, where the magnitude is larger in Te than in Sn. This behavior could possibly be attributed to deficiencies in the p-n correlations, in addition to the n-n correlations in Sn.

  16. Particle sorting in Filter Porous Media and in Sediment Transport: A Numerical and Experimental Study

    NASA Astrophysics Data System (ADS)

    Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I. N.; Antoun, T.; Smith, J.; Hall, R.; Woodson, S.

    2014-12-01

    Understanding the flow of fines, particulate sorting in porous media and fractured media during sediment transport is significant for industrial, environmental, geotechnical and petroleum technologies to name a few. For example, the safety of dam structures requires the characterization of the granular filter ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique. The numerical code is validated to experiments conducted at the USACE and ERDC. These laboratory experiments on soil transport and trapping in granular media are performed in constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of the soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, the high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and concentration of suspended particles. We have extended these results to more realistic heterogeneous population particulates for sediment transport. This work performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security, Science and Technology Directorate, Homeland Security Advanced Research Projects Agency.

  17. Comprehensive Marine Particle Analysis System

    DTIC Science & Technology

    2000-09-30

    [Figure residue from the original record: particle number density (number m⁻³) versus ESD (microns) for Trichodesmium, salp and doliolid, pteropod, protist, and gelatinous particle classes.] ...capability of the new sensor system. RESULTS: • Data gathered from HRS deployments provided information for HyCODE/ECOHAB models. • Multiple-season optical and [...]

  18. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    PubMed

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy, negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters that produce distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
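
    The decoding idea in this record rests on code orthogonality: when coded pulses from different channels add in a single electrical trace, projecting the sum back onto each channel's code recovers that channel's contribution. The sketch below illustrates only that principle for the synchronized case, using Walsh-Hadamard codes from SciPy; it is not the authors' code set or decoder, which must also handle time-shifted pulses and therefore needs codes with good cross-correlation properties at all lags.

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)            # 8x8 matrix of +/-1 entries with mutually orthogonal rows
codes = H[1:5]             # four channel codes (skip the all-ones row)

# Two particles arrive simultaneously in channels 0 and 2, so their coded
# pulses overlap completely in the single shared electrical output.
amplitudes = np.array([1.0, 0.0, 0.7, 0.0])
output = amplitudes @ codes          # superposed sensor trace (length 8)

# Decoding: project the overlapping trace back onto each channel's code.
recovered = output @ codes.T / 8.0
print(recovered)                     # ~[1.0, 0.0, 0.7, 0.0]
```

    Because the rows of a Hadamard matrix are orthogonal, the projections separate the overlapping contributions exactly in this aligned case; asynchronous operation is what motivates the CDMA-style code design described in the abstract.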

  19. Pairwise-interaction extended point-particle model for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Akiki, G.; Moore, W. C.; Balachandar, S.

    2017-12-01

    In this work we consider the pairwise interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise location of neighbors, the PIEP model goes beyond the local particle volume fraction and distinguishes the influence of upstream, downstream and laterally located neighbors. The two main ingredients of the PIEP model are that (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow, the latter approximated as a pairwise superposition of perturbation fields induced by each of the neighboring particles, and (ii) the forces and torque on the particle are then calculated from this undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model to systematically include the effect of perturbation fields induced by the neighbors in evaluating the net torque. Also, we use DNS results from a uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and improvement over the standard point-particle approach. In the case of two sedimenting spheres in a quiescent ambient, the PIEP model is shown to capture the drafting-kissing-tumbling process. In the cases of 5 and 80 sedimenting spheres, good agreement is obtained between the PIEP simulation and the DNS. For all three simulations, the DEM-PIEP was able to recreate, to a good extent, the results from the DNS, while requiring only a negligible fraction of the numerical resources required by the fully-resolved DNS.
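
    The pairwise-superposition structure of the model lends itself to a compact sketch: the undisturbed flow seen by a particle is the macroscale flow plus a sum of neighbor-induced disturbances looked up from precomputed maps, and a force is then evaluated from that undisturbed flow. The sketch below assumes a made-up, analytically decaying perturbation_map and a quasi-steady Stokes drag in place of the full Faxén-form force and torque model; it only illustrates the data flow, not the paper's actual maps or closures.

```python
import numpy as np

def perturbation_map(dx, d):
    """Hypothetical stand-in for a precomputed PIEP map: velocity disturbance
    induced at separation dx by a neighbor of diameter d (decays with distance)."""
    r = np.linalg.norm(dx)
    return np.zeros(3) if r < 1e-12 else -0.05 * (d / r) ** 3 * dx / r

def undisturbed_velocity(i, positions, diameters, u_macro):
    """Macroscale flow plus pairwise superposition of neighbor-induced disturbances."""
    u = u_macro.copy()
    for j, xj in enumerate(positions):
        if j != i:
            u += perturbation_map(positions[i] - xj, diameters[j])
    return u

def drag_force(u_undisturbed, v_particle, d, mu=1.8e-5):
    """Quasi-steady Stokes drag, used here in place of the full Faxen-form force."""
    return 3.0 * np.pi * mu * d * (u_undisturbed - v_particle)

# Example: three particles settling in a quiescent fluid.
positions = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2e-3], [1e-3, 0.0, 1e-3]])
velocities = np.tile([0.0, 0.0, -0.1], (3, 1))
diameters = np.full(3, 1e-3)
u_macro = np.zeros(3)
for i in range(3):
    u_i = undisturbed_velocity(i, positions, diameters, u_macro)
    print(i, drag_force(u_i, velocities[i], diameters[i]))
```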

  20. Detector Simulations with DD4hep

    NASA Astrophysics Data System (ADS)

    Petrič, M.; Frank, M.; Gaede, F.; Lu, S.; Nikiforou, N.; Sailer, A.

    2017-10-01

    Detector description is a key component of detector design studies, test beam analyses, and most particle physics experiments, which require the simulation of more and more different detector geometries and event types. This paper describes DD4hep, an easy-to-use yet flexible and powerful detector description framework that can be used for detector simulation and also extended to the specific needs of a particular working environment. The linear collider detector concepts ILD, SiD and CLICdp, as well as the detector development collaborations CALICE and FCal, have chosen to adopt the DD4hep geometry framework and its DDG4 pathway to Geant4 as their core simulation and reconstruction tools. The DDG4 plugins suite includes a wide variety of input formats, provides access to the Geant4 particle gun or general particle source, and allows for handling of Monte Carlo truth information, e.g., by linking hits and the primary particle that caused them, which is indispensable for performance and efficiency studies. An extendable array of segmentations and sensitive detectors allows the simulation of a wide variety of detector technologies. This paper shows how DD4hep allows complex Geant4 detector simulations to be performed without compiling a single line of additional code, by providing a palette of sub-detector components that can be combined and configured via compact XML files. Simulation is controlled either completely via the command line or via simple Python steering files interpreted by a Python executable. It also discusses how additional plugins and extensions can be created to increase the functionality.

  1. Einstein-Yang-Mills scattering amplitudes from scattering equations

    NASA Astrophysics Data System (ADS)

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    2015-01-01

    We present the building blocks that can be combined to produce tree-level S-matrix elements of a variety of theories with various spins mixed in arbitrary dimensions. The new formulas for the scattering of n massless particles are given by integrals over the positions of n points on a sphere restricted to satisfy the scattering equations. As applications, we obtain all single-trace amplitudes in Einstein-Yang-Mills (EYM) theory, and generalizations to include scalars. Also in EYM but extended by a B-field and a dilaton, we present all double-trace gluon amplitudes. The building blocks are made of Pfaffians and Parke-Taylor-like factors of subsets of particle labels.
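
    For reference, the scattering equations and the Parke-Taylor-type factor mentioned above take the standard forms below (written in their usual conventions; the paper's new building blocks combine such factors with Pfaffians).

```latex
% Scattering equations for n massless momenta k_a, with sigma_a points on the sphere:
\sum_{b \neq a} \frac{s_{ab}}{\sigma_a - \sigma_b} = 0, \qquad s_{ab} = (k_a + k_b)^2, \qquad a = 1,\dots,n .
% Parke-Taylor-like factor attached to an ordered (traced) subset of particle labels:
\mathrm{PT}(1,2,\dots,m) = \frac{1}{(\sigma_1-\sigma_2)(\sigma_2-\sigma_3)\cdots(\sigma_m-\sigma_1)} .
```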

  2. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one-dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat-transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code, a series of computations was performed for a model hypersonic propulsion test facility and scramjet. The parameters studied were the simulated flight Mach number, particle size, particle mass fraction and particle material.
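
    The interphase coupling that such a quasi-one-dimensional tool integrates can be sketched as a pair of relaxation equations for particle velocity and temperature driven by drag and convective heat transfer. The sketch below uses generic correlations (a Schiller-Naumann-type drag coefficient and a crude Ranz-Marshall-like Nusselt number) and placeholder property values; these are illustrative assumptions, not the report's actual particle models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic placeholders, not the report's actual drag/heat-transfer correlations.
rho_g, u_g, T_g, mu_g = 0.5, 800.0, 1500.0, 5e-5     # gas density, velocity, temperature, viscosity
d_p, rho_p, cp_p = 5e-6, 2000.0, 900.0               # particle diameter, density, specific heat
m_p = rho_p * np.pi * d_p**3 / 6.0                   # particle mass

def rhs(t, y):
    u_p, T_p = y
    Re = rho_g * abs(u_g - u_p) * d_p / mu_g
    Cd = 24.0 / max(Re, 1e-6) * (1.0 + 0.15 * Re**0.687)   # Schiller-Naumann drag
    Nu = 2.0 + 0.6 * np.sqrt(Re) * 0.9                     # crude Ranz-Marshall-like Nusselt number
    drag = 0.5 * rho_g * Cd * (np.pi * d_p**2 / 4) * abs(u_g - u_p) * (u_g - u_p)
    h = Nu * 0.1 / d_p                                     # assumed gas conductivity of 0.1 W/m-K
    qdot = h * np.pi * d_p**2 * (T_g - T_p)
    return [drag / m_p, qdot / (m_p * cp_p)]

sol = solve_ivp(rhs, (0.0, 5e-3), [0.0, 300.0], max_step=1e-5)
print(sol.y[0, -1], sol.y[1, -1])   # particle velocity and temperature relax toward the gas state
```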

  3. Modeling multi-GeV class laser-plasma accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim

    2016-10-01

    Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient, PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we will illustrate the issues related to the guiding of a high-intensity, short-pulse, laser when a realistic description for both the laser driver and the background plasma is adopted. Work Supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.

  4. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-06-01

    The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and a set of validation tests is presented here, together with a discussion of the code's performance.
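
    For reference, the time-averaged ponderomotive force that drives the plasma response in envelope models has the familiar textbook forms below; the exact normalization used inside INF&RNO is not reproduced here.

```latex
% Nonrelativistic time-averaged ponderomotive force on an electron, with E_0 the
% slowly varying amplitude of the laser field oscillating at frequency omega:
\mathbf{F}_p = -\,\frac{e^2}{4 m_e \omega^2}\,\nabla E_0^2 .
% Weakly relativistic form in terms of the normalized vector potential:
\mathbf{F}_p \simeq -\, m_e c^2\, \nabla \sqrt{1 + \langle a^2 \rangle}, \qquad a = \frac{eA}{m_e c} .
```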

  5. Crystallization and X-ray analysis of the T = 4 particle of hepatitis B capsid protein with an N-terminal extension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Wen Siang; McNae, Iain W.; Ho, Kok Lian

    2007-08-01

    Hepatitis B virus capsids have significant potential as carriers for immunogenic peptides. The crystal structure of the T = 4 particle of hepatitis B core protein containing an N-terminal extension reveals that the fusion peptide is exposed on the exterior of the particle. Hepatitis B core (HBc) particles have been extensively exploited as carriers for foreign immunological epitopes in the development of multicomponent vaccines and diagnostic reagents. Crystals of the T = 4 HBc particle were grown in PEG 20 000, ammonium sulfate and various types of alcohols. A temperature jump from 277 or 283 to 290 K was found to enhance crystal growth. A crystal grown using MPD as a cryoprotectant diffracted X-rays to 7.7 Å resolution and data were collected to 99.6% completeness at 8.9 Å. The crystal belongs to space group P2_12_12_1, with unit-cell parameters a = 352.3, b = 465.5, c = 645.0 Å. The electron-density map reveals a protrusion that is consistent with the N-terminus extending out from the surface of the capsid. The structure presented here supports the idea that N-terminal insertions can be exploited in the development of diagnostic reagents, multicomponent vaccines and delivery vehicles into mammalian cells.

  6. Perpendicular and Parallel Ion Stochastic Heating by Kinetic Alfvén Wave Turbulence in the Solar Wind

    NASA Astrophysics Data System (ADS)

    Hoppock, I. W.; Chandran, B. D. G.

    2017-12-01

    The dissipation of turbulence is a prime candidate to explain the heating of collisionless plasmas like the solar wind. We consider the heating of protons and alpha particles using test particle simulations with a broad spectrum of randomly phased kinetic Alfvén waves (KAWs). Previous research extensively simulated and analytically considered stochastic heating at low plasma beta for conditions similar to coronal holes and the near-Sun solar wind. We verify the analytical models of proton and alpha-particle heating rates, and extend these simulations to plasmas with beta of order unity, as in the solar wind at 1 au. Furthermore, we consider cases with very large beta, of order 100, relevant to other astrophysical plasmas. We explore the parameter dependence of the critical KAW amplitude that breaks the gyro-center approximation and leads to stochastic gyro-orbits of the particles. Our results suggest that stochastic heating by KAW turbulence is an efficient heating mechanism for moderate- to high-beta plasmas.

  7. Simulation of a complete X-ray digital radiographic system for industrial applications.

    PubMed

    Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H

    2018-05-19

    Simulating X-ray images is of great importance in industry and medicine. Such simulations permit optimization of the parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A three-layer uniform plate, consisting of a polymer overcoat layer, a phosphor layer and a polycarbonate backing layer, was also defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel of the phosphor layer using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed and the images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.
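
    The dose-to-gray-value step described above amounts to applying a pixel-wise calibration curve to the mesh-tally output. The sketch below assumes a hypothetical logarithmic response with placeholder constants a and b; the actual relationship was determined experimentally by the authors and is not reproduced here.

```python
import numpy as np

def dose_to_gray(dose_map, a=2500.0, b=40.0):
    """Hypothetical calibration: CR plates respond roughly logarithmically with
    exposure, so map the per-pixel absorbed dose (mesh-tally output) to gray
    values with a log curve clipped to a 16-bit range. a and b are placeholders."""
    gray = a * np.log10(1.0 + dose_map / dose_map.max() * 1e3) + b
    return np.clip(gray, 0, 65535).astype(np.uint16)

# Example with a synthetic step-wedge dose map (thicker steps -> lower dose).
dose = np.concatenate([np.full((64, 32), d) for d in (1.0, 0.5, 0.25, 0.1)], axis=1)
image = dose_to_gray(dose)
print(image[0, ::32])   # one gray value per wedge step
```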

  8. Performance of a multilevel quantum heat engine of an ideal N-particle Fermi system.

    PubMed

    Wang, Rui; Wang, Jianhui; He, Jizhou; Ma, Yongli

    2012-08-01

    We generalize the quantum heat engine (QHE) model first proposed by Bender et al. [J. Phys. A 33, 4427 (2000)] to the case in which an ideal Fermi gas with an arbitrary number N of particles in a box trap is used as the working substance. Besides two quantum adiabatic processes, the engine model contains two isoenergetic processes, during which the particles are coupled to energy baths at a high constant energy E_h and a low constant energy E_c, respectively. Directly employing finite-time thermodynamics, we find that the power output is enhanced by increasing the particle number N (or decreasing the minimum trap size L_A) for given L_A (or N), without reduction in the efficiency. By use of global optimization, the efficiency at possible maximum power output (EPMP) is found to be universal and independent of any parameter contained in the engine model. For an engine model with any particle number N, the efficiency at maximum power output (EMP) can be determined under the condition that it should be closest to the EPMP. Moreover, we extend the heat engine to a more general multilevel engine model with an arbitrary 1D power-law potential. Comparison between our engine model and the Carnot cycle shows that, under the same conditions, the efficiency η = 1 - E_c/E_h of the engine cycle is bounded from above by the Carnot value η_C = 1 - T_c/T_h.

  9. Comparison study of photon attenuation characteristics of Lead-Boron Polyethylene by MCNP code, XCOM and experimental data

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jia, Mingchun; Gong, Junjun; Xia, Wenming

    2017-08-01

    The linear attenuation coefficient, mass attenuation coefficient and mean free path of various Lead-Boron Polyethylene (PbBPE) samples, which can be used as photon shielding materials in marine reactors, have been simulated using the Monte Carlo N-Particle (MCNP5) code. The MCNP simulation results are in good agreement with the XCOM values and the reported experimental data for Cesium-137 and Cobalt-60 sources. Thus, this MCNP-based method can be used to simulate the photon attenuation characteristics of various types of PbBPE materials.
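
    The three quantities compared in this record are tied together by the Beer-Lambert attenuation law:

```latex
% Narrow-beam (Beer-Lambert) attenuation of photons through a thickness x:
I(x) = I_0\, e^{-\mu x}, \qquad
\mu_m = \frac{\mu}{\rho}\ \text{(mass attenuation coefficient)}, \qquad
\lambda = \frac{1}{\mu}\ \text{(mean free path)} .
```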

  10. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  11. The Forest Method as a New Parallel Tree Method with the Sectional Voronoi Tessellation

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Mori, Masao; Yoshii, Yuzuru

    1999-09-01

    We have developed a new parallel tree method which will be called the forest method hereafter. This new method uses the sectional Voronoi tessellation (SVT) for the domain decomposition. The SVT decomposes the whole space into polyhedra and allows their flat borders to move by assigning different weights. The forest method determines these weights based on the load balancing among processors by means of the overload diffusion (OLD). Moreover, since all the borders are flat, before receiving the data from other processors, each processor can collect enough data to calculate the gravity force with precision. Both the SVT and the OLD are coded in a highly vectorizable manner to run efficiently on vector parallel processors. The parallel code based on the forest method with the Message Passing Interface is run on various platforms, so wide portability is guaranteed. Extensive calculations with 15 processors of a Fujitsu VPP300/16R indicate that the code can calculate the gravity force exerted on 10^5 particles per second for an idealized dark halo. This code is found to enable an N-body simulation with 10^7 or more particles over a wide dynamic range and is therefore a very powerful tool for the study of galaxy formation and large-scale structure in the universe.

  12. Pion and electromagnetic contribution to dose: Comparisons of HZETRN to Monte Carlo results and ISS data

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Reddell, Brandon; Bahadori, Amir; Norman, Ryan B.; Badavi, Francis F.

    2013-07-01

    Recent work has indicated that pion production and the associated electromagnetic (EM) cascade may be an important contribution to the total astronaut exposure in space. Recent extensions to the deterministic space radiation transport code, HZETRN, allow the production and transport of pions, muons, electrons, positrons, and photons. In this paper, the extended code is compared to the Monte Carlo codes, Geant4, PHITS, and FLUKA, in slab geometries exposed to galactic cosmic ray (GCR) boundary conditions. While improvements in the HZETRN transport formalism for the new particles are needed, it is shown that reasonable agreement on dose is found at larger shielding thicknesses commonly found on the International Space Station (ISS). Finally, the extended code is compared to ISS data on a minute-by-minute basis over a seven day period in 2001. The impact of pion/EM production on exposure estimates and validation results is clearly shown. The Badhwar-O'Neill (BO) 2004 and 2010 models are used to generate the GCR boundary condition at each time-step allowing the impact of environmental model improvements on validation results to be quantified as well. It is found that the updated BO2010 model noticeably reduces overall exposure estimates from the BO2004 model, and the additional production mechanisms in HZETRN provide some compensation. It is shown that the overestimates provided by the BO2004 GCR model in previous validation studies led to deflated uncertainty estimates for environmental, physics, and transport models, and allowed an important physical interaction (π/EM) to be overlooked in model development. Despite the additional π/EM production mechanisms in HZETRN, a systematic under-prediction of total dose is observed in comparison to Monte Carlo results and measured data.

  13. Light transport feature for SCINFUL.

    PubMed

    Etaati, G R; Ghal-Eh, N

    2008-03-01

    An extended version of the scintillator response function prediction code SCINFUL has been developed by incorporating PHOTRACK, a Monte Carlo light transport code. Comparisons of calculated and experimental results for organic scintillators exposed to neutrons show that the extended code improves the predictive capability of SCINFUL.

  14. Two-dimensional implosion simulations with a kinetic particle code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code rage. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of rage and statistical noise in the kinetic studies.

  15. Two-dimensional implosion simulations with a kinetic particle code

    DOE PAGES

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    2017-05-17

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code rage. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of rage and statistical noise in the kinetic studies.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, H.R.

    This paper describes FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to study neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.

  17. Filling of a Poisson trap by a population of random intermittent searchers.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant-velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver their cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N→∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasi-steady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite N. We thus determine how the mean first passage time (MFPT) for filling the target depends on N and n.
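
    For the inhomogeneous Poisson description above, the waiting-time density for the nth capture takes the standard form

```latex
f_n(t) = \lambda(t)\,\frac{\mu(t)^{\,n-1}}{(n-1)!}\, e^{-\mu(t)},
\qquad \mu(t) = \int_0^t \lambda(s)\, ds ,
```

    which is the quantity analyzed once λ(t) has been obtained from the reaction-hyperbolic equations.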

  18. Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1999-01-01

    Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.

  19. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability-density-function-based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved using the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
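
    For reference, the three monomodal distribution functions named above are commonly written as follows (one standard parameterization; the paper's own notation may differ):

```latex
% Rosin-Rammler (cumulative form), normal, and log-normal size distributions, D = particle size:
F_{\mathrm{RR}}(D) = 1 - \exp\!\left[-\left(D/\bar{D}\right)^{k}\right], \qquad
f_{\mathrm{N}}(D)  = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(D-\mu)^2}{2\sigma^2}\right], \qquad
f_{\mathrm{LN}}(D) = \frac{1}{D\,\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(\ln D-\mu)^2}{2\sigma^2}\right].
```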

  20. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used in massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
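
    A minimal sketch of the data-reorganization idea, assuming a NumPy-style array layout: periodically sorting the particle arrays by flattened cell index keeps particles that touch the same grid cells contiguous in memory, which is what improves cache reuse during deposit and gather. This illustrates the strategy, not the authors' Fortran implementation.

```python
import numpy as np

def sort_particles_by_cell(x, y, vx, vy, dx, nx):
    """Reorder particle arrays so particles in the same grid cell are contiguous,
    mirroring the cache-friendly data layout described for RISC processors."""
    ix = (x / dx).astype(int)
    iy = (y / dx).astype(int)
    cell = iy * nx + ix                       # flattened cell index
    order = np.argsort(cell, kind="stable")
    return x[order], y[order], vx[order], vy[order]

rng = np.random.default_rng(1)
n, nx, dx = 100_000, 64, 1.0 / 64
x, y = rng.random(n), rng.random(n)
vx, vy = rng.normal(size=n), rng.normal(size=n)
x, y, vx, vy = sort_particles_by_cell(x, y, vx, vy, dx, nx)
```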

  1. Chemically Reacting One-Dimensional Gas-Particle Flows

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Penny, M. M.

    1975-01-01

    The governing equations for the one-dimensional flow of a gas-particle system are discussed. Gas-particle effects are coupled via the system momentum and energy equations with the gas assumed to be chemically frozen or in chemical equilibrium. A computer code for calculating the one-dimensional flow of a gas-particle system is discussed and a user's input guide presented. The computer code provides for the expansion of the gas-particle system from a specified starting velocity and nozzle inlet geometry. Though general in nature, the final output of the code is a startline for initiating the solution of a supersonic gas-particle system in rocket nozzles. The startline includes gasdynamic data defining gaseous startline points from the nozzle centerline to the nozzle wall and particle properties at points along the gaseous startline.

  2. Non-Born-Oppenheimer electronic and nuclear densities for a Hooke-Calogero three-particle model: non-uniqueness of density-derived molecular structure.

    PubMed

    Ludeña, E V; Echevarría, L; Lopez, X; Ugalde, J M

    2012-02-28

    We consider the calculation of non-Born-Oppenheimer, nBO, one-particle densities for both electrons and nuclei. We show that the nBO one-particle densities evaluated in terms of translationally invariant coordinates are independent of the wavefunction describing the motion of center of mass of the whole system. We show that they depend, however, on an arbitrary reference point from which the positions of the vectors labeling the particles are determined. We examine the effect that this arbitrary choice has on the topology of the one-particle density by selecting the Hooke-Calogero model of a three-body system for which expressions for the one-particle densities can be readily obtained in analytic form. We extend this analysis to the one-particle densities obtained from full Coulomb interaction wavefunctions for three-body systems. We conclude, in view of the fact that there is a close link between the choice of the reference point and the topology of one-particle densities that the molecular structure inferred from the topology of these densities is not unique. We analyze the behavior of one-particle densities for the Hooke-Calogero Born-Oppenheimer, BO, wavefunction and show that topological transitions are also present in this case for a particular mass value of the light particles even though in the BO regime the nuclear masses are infinite. In this vein, we argue that the change in topology caused by variation of the mass ratio between light and heavy particles does not constitute a true indication in the nBO regime of the emergence of molecular structure.

  3. Non-Born-Oppenheimer electronic and nuclear densities for a Hooke-Calogero three-particle model: Non-uniqueness of density-derived molecular structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludena, E. V.; Echevarria, L.; Lopez, X.

    2012-02-28

    We consider the calculation of non-Born-Oppenheimer, nBO, one-particle densities for both electrons and nuclei. We show that the nBO one-particle densities evaluated in terms of translationally invariant coordinates are independent of the wavefunction describing the motion of center of mass of the whole system. We show that they depend, however, on an arbitrary reference point from which the positions of the vectors labeling the particles are determined. We examine the effect that this arbitrary choice has on the topology of the one-particle density by selecting the Hooke-Calogero model of a three-body system for which expressions for the one-particle densities can be readily obtained in analytic form. We extend this analysis to the one-particle densities obtained from full Coulomb interaction wavefunctions for three-body systems. We conclude, in view of the fact that there is a close link between the choice of the reference point and the topology of one-particle densities, that the molecular structure inferred from the topology of these densities is not unique. We analyze the behavior of one-particle densities for the Hooke-Calogero Born-Oppenheimer, BO, wavefunction and show that topological transitions are also present in this case for a particular mass value of the light particles even though in the BO regime the nuclear masses are infinite. In this vein, we argue that the change in topology caused by variation of the mass ratio between light and heavy particles does not constitute a true indication in the nBO regime of the emergence of molecular structure.

  4. Multidimensional Multiphysics Simulation of TRISO Particle Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Hales; R. L. Williamson; S. R. Novascone

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models and the uncomplicated ability to couple to lower-length-scale simulations make BISON a powerful tool for the simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  5. Critical Gradient Behavior of Alfvén Eigenmode Induced Fast-Ion Transport in Phase Space

    NASA Astrophysics Data System (ADS)

    Collins, C. S.; Pace, D. C.; van Zeeland, M. A.; Heidbrink, W. W.; Stagner, L.; Zhu, Y. B.; Kramer, G. J.; Podesta, M.; White, R. B.

    2016-10-01

    Experiments on DIII-D have shown that energetic particle (EP) transport suddenly increases when multiple Alfvén eigenmodes (AEs) cause particle orbits to become stochastic. Several key features have been observed: (1) the transport threshold is phase-space dependent and occurs above the AE linear stability threshold, (2) EP losses become intermittent above threshold and appear to depend on the types of AEs present, and (3) stiff transport causes the EP density profile to remain unchanged even if the source increases. Theoretical analysis using the NOVA and ORBIT codes shows that the threshold corresponds to when particle orbits become stochastic due to wave-particle resonances with AEs in the region of phase space measured by the diagnostics. The kick model in NUBEAM (TRANSP) is used to evolve the EP distribution function to study which modes cause the most transport and to further characterize intermittent bursts of EP losses, which are associated with large-scale redistribution through the domino effect. Work supported by the US DOE under DE-FC02-04ER54698.

  6. Proton Dose Assessment to the Human Eye Using Monte Carlo N-Particle Transport Code (MCNPX)

    DTIC Science & Technology

    2006-08-01

    [Only fragments of this record survive:] current treatments are applied using an infrared diode laser (projecting a spot size of 2-3 mm), used for about 1 minute per exposure. The laser heats... [Reference fragments:] Shultis J, Faw R. An MCNP Primer. Available at: http://ww2.mne.ksu.edu/-jks/MCNPprmr.pdf. Accessed 3 January 2006. Stys P, Lopachin R

  7. Generation of Werner states via collective decay of coherently driven atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Girish S.; Kapale, Kishore T.

    2006-02-15

    We show deterministic generation of Werner states as a steady state of the collective decay dynamics of a pair of neutral atoms coupled to a leaky cavity and strong coherent drive. We also show how the scheme can be extended to generate a 2N-particle analogue of the bipartite Werner states.

  8. An implementation of a tree code on a SIMD, parallel computer

    NASA Technical Reports Server (NTRS)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,636 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) type computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) type parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
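
    The tree construction described above (recursive median splits of sorted particle lists along x, y and z) can be sketched serially as below; this is only an illustration of the balanced data structure, not the sort-based MasPar parallel implementation.

```python
import numpy as np

def build_balanced_tree(points, depth=0):
    """Recursively split the particle list at its median along x, y, z in turn.
    Every leaf holds one particle, so the resulting tree is completely balanced."""
    n = len(points)
    if n <= 1:
        return {"leaf": True, "point": points[0] if n else None}
    axis = depth % 3
    points = points[np.argsort(points[:, axis], kind="stable")]
    mid = n // 2
    return {
        "leaf": False,
        "axis": axis,
        "split": points[mid, axis],
        "left": build_balanced_tree(points[:mid], depth + 1),
        "right": build_balanced_tree(points[mid:], depth + 1),
    }

pts = np.random.default_rng(2).random((16, 3))
tree = build_balanced_tree(pts)
print(tree["axis"], tree["split"])
```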

  9. PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils

    NASA Technical Reports Server (NTRS)

    Johnson, Scott; Walton, Otis; Settgast, Randolph

    2013-01-01

    PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.

  10. Accelerating Pseudo-Random Number Generator for MCNP on GPU

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu

    2010-09-01

    Pseudo-random number generators (PRNGs) are used intensively in many stochastic algorithms in particle simulations, artificial neural networks and other scientific computations. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jump-ahead, and high speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPUs) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 times are achieved compared with 4- to 6-core CPUs, and more than 679.18 million double-precision random numbers can be generated per second on the GPU.
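
    The "flexible jump" requirement is usually met for linear congruential generators by composing the update map k times with O(log k) work. Below is a minimal sketch of that skip-ahead, using Knuth's MMIX constants purely as placeholders (they are not MCNP's actual multiplier and increment).

```python
def lcg_skip_ahead(x, k, a, c, m):
    """Advance an LCG x -> (a*x + c) mod m by k steps in O(log k) multiplications,
    by binary-exponentiating the affine map (a, c). Constants are placeholders."""
    A, C = 1, 0            # identity transform
    ap, cp = a, c          # current 2^j-step transform
    while k > 0:
        if k & 1:
            A, C = (ap * A) % m, (ap * C + cp) % m
        ap, cp = (ap * ap) % m, (ap * cp + cp) % m
        k >>= 1
    return (A * x + C) % m

# Check against stepping one at a time (Knuth MMIX constants as an example only).
a, c, m, seed = 6364136223846793005, 1442695040888963407, 2**64, 123456789
x = seed
for _ in range(1000):
    x = (a * x + c) % m
assert x == lcg_skip_ahead(seed, 1000, a, c, m)
print("skip-ahead matches sequential stepping")
```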

  11. Construction of boundary-surface-based Chinese female astronaut computational phantom and proton dose estimation

    PubMed Central

    Sun, Wenjuan; JIA, Xianghong; XIE, Tianwu; XU, Feng; LIU, Qian

    2013-01-01

    With the rapid development of China's space industry, the importance of radiation protection is increasingly prominent. To provide relevant dose data, we first developed the Visible Chinese Human adult Female (VCH-F) phantom, and performed further modifications to generate the VCH-F Astronaut (VCH-FA) phantom, incorporating statistical body characteristics data from the first batch of Chinese female astronauts as well as reference organ mass data from the International Commission on Radiological Protection (ICRP; both within 1% relative error). Based on cryosection images, the original phantom was constructed via Non-Uniform Rational B-Spline (NURBS) boundary surfaces to strengthen the deformability for fitting the body parameters of Chinese female astronauts. The VCH-FA phantom was voxelized at a resolution of 2 × 2 × 4 mm^3 for radioactive particle transport simulations from isotropic protons with energies of 5000–10 000 MeV in the Monte Carlo N-Particle eXtended (MCNPX) code. To investigate discrepancies caused by anatomical variations and other factors, the obtained doses were compared with corresponding values from other phantoms and sex-averaged doses. Dose differences were observed among phantom calculation results, especially for effective dose with low-energy protons. Local skin thickness shifts the breast dose curve toward high energy, but has little impact on inner organs. Under a shielding layer, organ dose reduction is greater for skin than for other organs. The calculated skin dose per day closely approximates measurement data obtained in low-Earth orbit (LEO). PMID:23135158

  12. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability

    DOE PAGES

    Wang, Chi -Jen; Liu, Da -Jiang; Evans, James W.

    2015-04-28

    Threshold versions of Schloegl's model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts with both the Gibbs phase rule for thermodynamic systems and previous analysis of this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting with previously suggested critical behavior. As a result, mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.

  13. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W.

    2015-04-01

    Threshold versions of Schloegl's model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts with both the Gibbs phase rule for thermodynamic systems and previous analysis of this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting with previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.
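
    The model rules are simple enough to sketch directly: occupied sites annihilate spontaneously at rate p, and an empty site is filled autocatalytically (at unit rate) only if at least N = 2 of its nearest neighbors are occupied. The random-sequential-update sketch below, with p ≤ 1 treated as a per-attempt probability, is an illustration of the threshold contact process rules, not the authors' simulation or analysis code.

```python
import numpy as np

def sweep(lattice, p, rng, threshold=2):
    """One Monte Carlo sweep of the threshold contact process (Schloegl-type model):
    spontaneous annihilation with per-attempt probability p (p <= 1 assumed),
    autocatalytic creation at empty sites with >= `threshold` occupied neighbors."""
    L = lattice.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        occ_neighbors = (lattice[(i + 1) % L, j] + lattice[(i - 1) % L, j]
                         + lattice[i, (j + 1) % L] + lattice[i, (j - 1) % L])
        if lattice[i, j] == 1:
            if rng.random() < p:                 # spontaneous annihilation
                lattice[i, j] = 0
        elif occ_neighbors >= threshold:         # autocatalytic creation
            lattice[i, j] = 1
    return lattice

rng = np.random.default_rng(3)
L, p = 64, 0.1
lattice = np.ones((L, L), dtype=int)             # start from the populated state
for _ in range(200):
    lattice = sweep(lattice, p, rng)
print("particle density:", lattice.mean())
```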

  14. ORBIT modelling of fast particle redistribution induced by sawtooth instability

    NASA Astrophysics Data System (ADS)

    Kim, Doohyun; Podestà, Mario; Poli, Francesca; Princeton Plasma Physics Laboratory Team

    2017-10-01

    Initial tests on NSTX-U show that introducing energy selectivity for sawtooth (ST) induced fast-ion redistribution improves the agreement between experimental and simulated quantities, e.g. the neutron rate. Thus, it is expected that a proper description of the fast-particle redistribution due to STs can improve the modelling of the ST instability and the interpretation of experiments using a transport code. In this work, we use the ORBIT code to characterise the redistribution of fast particles. In order to simulate a ST crash, a spatial and temporal displacement ξ(ρ, t, θ, φ) = Σ_{m,n} ξ_{mn}(ρ, t) cos(mθ + nφ) is implemented to produce perturbed magnetic fields from the equilibrium field B, δB = ∇ × (ξ × B), which affect the fast-particle distribution. From ORBIT simulations, we find suitable amplitudes of ξ for each ST crash to reproduce the experimental results. The comparison of the simulation and the experimental results will be discussed, as well as the dependence of fast-ion redistribution on fast-ion phase-space variables (i.e. energy, magnetic moment and toroidal angular momentum). Work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Contract Number DE-AC02-09CH11466.

  15. Applications of the microdosimetric function implemented in the macroscopic particle transport simulation code PHITS.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji

    2012-01-01

    Microdosimetric quantities such as lineal energy are generally considered to be better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, it is necessary to integrate microdosimetric tools such as track-structure simulation codes with macroscopic particle transport simulation codes. As an integration approach, the mathematical model for calculating the PD of microdosimetric quantities developed based on track-structure simulations was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time, while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.

  16. Neutron-induced fission cross-section measurement of 234U with quasi-monoenergetic beams in the keV and MeV range using micromegas detectors

    NASA Astrophysics Data System (ADS)

    Tsinganis, A.; Kokkoris, M.; Vlastou, R.; Kalamara, A.; Stamatopoulos, A.; Kanellakopoulos, A.; Lagoyannis, A.; Axiotis, M.

    2017-09-01

    Accurate data on neutron-induced fission cross-sections of actinides are essential for the design of advanced nuclear reactors based either on fast neutron spectra or alternative fuel cycles, as well as for the reduction of safety margins of existing and future conventional facilities. The fission cross-section of 234U was measured at incident neutron energies of 560 and 660 keV and 7.5 MeV with a setup based on 'microbulk' Micromegas detectors and the same samples previously used for the measurement performed at the CERN n_TOF facility (Karadimos et al., 2014). The 235U fission cross-section was used as reference. The (quasi-)monoenergetic neutron beams were produced via the 7Li(p,n) and the 2H(d,n) reactions at the neutron beam facility of the Institute of Nuclear and Particle Physics at the 'Demokritos' National Centre for Scientific Research. A detailed study of the neutron spectra produced in the targets and intercepted by the samples was performed coupling the NeuSDesc and MCNPX codes, taking into account the energy spread, energy loss and angular straggling of the beam ions in the target assemblies, as well as contributions from competing reactions and neutron scattering in the experimental setup. Auxiliary Monte-Carlo simulations were performed with the FLUKA code to study the behaviour of the detectors, focusing particularly on the reproduction of the pulse height spectra of α-particles and fission fragments (using distributions produced with the GEF code) for the evaluation of the detector efficiency. An overview of the developed methodology and preliminary results are presented.

  17. Energetic ion acceleration at collisionless shocks

    NASA Technical Reports Server (NTRS)

    Decker, R. B.; Vlahos, L.

    1985-01-01

    An example is presented from a test particle simulation designed to study ion acceleration at oblique turbulent shocks. For conditions appropriate at interplanetary shocks near 1 AU, it is found that a shock with θ_Bn = 60° is capable of producing an energy spectrum extending from 10 keV to approximately 1 MeV in approximately 1 hour. In this case total energy gains result primarily from several separate episodes of shock drift acceleration, each of which occurs when particles are scattered back to the shock by magnetic fluctuations in the shock vicinity.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pisenti, N.; Gaebler, C. P. E.; Lynn, T. W.

    Measuring an entangled state of two particles is crucial to many quantum communication protocols. Yet Bell-state distinguishability using a finite apparatus obeying linear evolution and local measurement is theoretically limited. We extend known bounds for Bell-state distinguishability in one and two variables to the general case of entanglement in n two-state variables. We show that at most 2^(n+1) - 1 classes out of 4^n hyper-Bell states can be distinguished with one copy of the input state. With two copies, complete distinguishability is possible. We present optimal schemes in each case.

  19. 3D Multispecies Nonlinear Perturbative Particle Simulation of Intense Nonneutral Particle Beams (Research supported by the Department of Energy and the Short Pulse Spallation Source Project and LANSCE Division of LANL.)

    NASA Astrophysics Data System (ADS)

    Qin, Hong; Davidson, Ronald C.; Lee, W. Wei-Li

    1999-11-01

    The Beam Equilibrium Stability and Transport (BEST) code, a 3D multispecies nonlinear perturbative particle simulation code, has been developed to study collective effects in intense charged particle beams described self-consistently by the Vlasov-Maxwell equations. A Darwin model is adopted for transverse electromagnetic effects. As a 3D multispecies perturbative particle simulation code, it provides several unique capabilities. Since the simulation particles are used to simulate only the perturbed distribution function and self-fields, the simulation noise is reduced significantly. The perturbative approach also enables the code to investigate different physics effects separately, as well as simultaneously. The code can be easily switched between linear and nonlinear operation, and used to study both linear stability properties and nonlinear beam dynamics. These features, combined with 3D and multispecies capabilities, provide an effective tool to investigate the electron-ion two-stream instability, periodically focused solutions in alternating focusing fields, and many other important problems in nonlinear beam dynamics and accelerator physics. Applications to the two-stream instability are presented.
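
    The noise-reduction argument behind the perturbative (δf) approach can be stated compactly: the distribution function is split as below and the simulation markers carry weights that sample only the perturbation, so the statistical noise scales with |δf| rather than with the full |f|. This is the generic form of the method; BEST's specific normalizations are not reproduced here.

```latex
f(\mathbf{x},\mathbf{p},t) = f_0(\mathbf{x},\mathbf{p}) + \delta f(\mathbf{x},\mathbf{p},t),
\qquad
w_i(t) \equiv \left.\frac{\delta f}{f}\right|_{\mathbf{z}_i(t)} \quad \text{(one common weight definition)} .
```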

  20. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap-frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed-memory parallel computer, we use the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
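
    The two dominant tasks named above (the leap-frog particle push and the Fourier-space Poisson solve) fit in a short sketch. The code below is a serial, normalized-units illustration with nearest-grid-point weighting; it is not the GCPIC decomposition or the Mark IIIfp implementation, and the grid size, particle number and time step are arbitrary choices.

```python
import numpy as np

# Minimal 2-D periodic electrostatic PIC sketch (nearest-grid-point weighting,
# FFT Poisson solve, leap-frog-style push) in normalized units: grid spacing = 1,
# background ion density = 1, electron charge-to-mass ratio = -1.
nx = ny = 32
n_part = nx * ny * 16
dt = 0.1

rng = np.random.default_rng(0)
pos = rng.random((n_part, 2)) * [nx, ny]
vel = 0.1 * rng.normal(size=(n_part, 2))
qm = -1.0
weight = -1.0 * nx * ny / n_part       # electron charge per marker; ions are a fixed background

kx = 2j * np.pi * np.fft.fftfreq(nx)
ky = 2j * np.pi * np.fft.fftfreq(ny)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                         # avoid division by zero; the mean charge is zero anyway

def deposit(pos):
    ix = np.floor(pos[:, 0]).astype(int) % nx
    iy = np.floor(pos[:, 1]).astype(int) % ny
    rho = np.zeros((nx, ny))
    np.add.at(rho, (ix, iy), weight)
    return rho + 1.0                   # add the uniform neutralizing ion background

def field(rho):
    phi_hat = np.fft.fft2(rho) / (-K2)             # solve  lap(phi) = -rho  in Fourier space
    phi_hat[0, 0] = 0.0
    ex = np.real(np.fft.ifft2(-KX * phi_hat))      # E = -grad(phi)
    ey = np.real(np.fft.ifft2(-KY * phi_hat))
    return ex, ey

for step in range(100):
    rho = deposit(pos)
    ex, ey = field(rho)
    ix = np.floor(pos[:, 0]).astype(int) % nx
    iy = np.floor(pos[:, 1]).astype(int) % ny
    E = np.stack([ex[ix, iy], ey[ix, iy]], axis=1)
    vel += qm * E * dt                 # leap-frog-style push (half-step offset omitted for brevity)
    pos = (pos + vel * dt) % [nx, ny]

print("field energy:", 0.5 * np.sum(ex**2 + ey**2))
```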

  1. Iron-carbide cluster thermal dynamics for catalyzed carbon nanotube growth

    NASA Astrophysics Data System (ADS)

    Ding, Feng; Bolton, Kim; Rosén, Arne

    2004-07-01

    Molecular dynamics simulations have been used to study the thermal behavior of Fe_(N-m)C_m clusters, where N, the total number of atoms, extends up to 2400. Comparison of the computed results with experimental data shows that the simulations yield the correct trends for the liquid-solid region of the iron-carbide phase diagram as well as the correct dependence of the cluster melting point on cluster size. The calculations indicate that, when carbon nanotubes (CNTs) are grown on large (>3-4 nm) catalyst particles at low temperatures (<1200 K), the catalyst particles are not completely molten. It is argued that the mechanism of CNT growth under these conditions may be governed by surface melting of the cluster.

  2. Four Point Measurements of the Foreshock

    NASA Technical Reports Server (NTRS)

    Sibeck, D. G.; Omidi, N.; Angelopoulos, V.

    2008-01-01

    Hybrid code numerical simulations accurately predict the properties of the Earth's foreshock, a region populated by solar wind particles heated and reflected by their interaction with the bow shock. The thermal pressures associated with the reflected population suffice to substantially modify the oncoming solar wind, reducing densities, velocities, and magnetic field strengths while enhancing temperatures. Enhanced thermal pressures cause the foreshock to expand at the expense of the ambient solar wind, creating a boundary that extends approximately 10 R_E upstream and is marked by enhanced densities and magnetic field strengths, and flows deflected away from the foreshock. We present a case study of Cluster plasma and magnetic field observations of this boundary.

  3. The interactive scaling hypothesis and dynamic textures in nematics

    NASA Astrophysics Data System (ADS)

    Rozhkov, S.

    2002-03-01

    A new approach to the description of dynamic textures (DT) in systems with continuous symmetry is proposed. Such textures occur in various dissipative motions of liquid crystals involving different extended objects: topological defects in the order-parameter field and suspended particles. The main idea of the approach is to transfer the law of interaction between the extended objects (hedgehogs, disclinations, boojums, colloidal particles, etc.) to the host system by redefining its spatiotemporal scales. I call this procedure interactive scaling (IS). In a number of experiments with nematics^1-3 a pair of objects behaves as two point particles interacting via the attractive force F_a = CK(a/r)^{m-1}, where r is the separation between the particle centers, K is the Frank elastic constant, C is a constant, and m >= 1. The dynamics of the objects is purely dissipative, with a Stokes-type drag due to the reorientation of the order-parameter (director) field in some vicinity of the objects. For the pair's dissipative dynamics in nematics we find the velocity v at which the interparticle distance r decreases: v(r) = v_c(a/r)^{m-1}, with v_c = 2CK/(lη), where η is the orientational viscosity and l is the drag length. The parameters C, a and l can be estimated theoretically and determined experimentally. The IS hypothesis postulates the time dependence of the director field in the form n(r, t) = n(x + ε v(2|x|)t/2, y, z), which yields the DT equation for n(r): (2νε^{m}/(a x^{m-1})) ∂_x n = ∇^2 n + (∇n)^2 n, where ν = Ca/l is the IS ratio, ε = sign(x), and v(2|x|) coincides with the velocity at which the pair's particles approach each other in the x direction. This equation corresponds to the "one-constant approximation" and the absence of fluid flow. For m = 2 (the "Coulombic" force) in the planar case n = [cos Φ, sin Φ, 0], we find the disclination solution of the DT equation: Φ = (k/2) C_ν ∫_0^φ cos^{2ν} φ' dφ', where k is an integer, φ is the polar angle, and C_ν = √π Γ(ν+1)/Γ(ν+1/2). The case ν = 0 recovers Frank's disclinations. 1. O.D. Lavrentovich and S.S. Rozhkov, JETP Lett. 47, 254 (1988). 2. A. Pargellis, N. Turok and B. Yurke, Phys. Rev. Lett. 67, 1570 (1991). 3. P. Poulin, V. Cabuil and D.A. Weitz, Phys. Rev. Lett. 79, 4862 (1997).

  4. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.

  5. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
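
    The cubic spline smoothing kernel mentioned above is a standard SPH ingredient. The following is a minimal sketch of the usual Monaghan form in 3D (with compact support 2h); it illustrates the standard kernel and is not taken from the StarSmasher source, whose normalization or support convention may differ:

      import numpy as np

      def w_cubic_spline(r, h):
          """Standard Monaghan cubic spline SPH kernel in 3D (compact support 2h)."""
          q = np.asarray(r, dtype=float) / h
          sigma = 1.0 / (np.pi * h**3)                 # 3D normalization constant
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
              np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
          return sigma * w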

  6. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
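
    The central trick is to integrate only the residual displacement around the analytic LPT trajectory. Below is a schematic, hedged sketch of one such time step; the function and variable names are illustrative assumptions, and this is not the paper's actual discretization (which is formulated with modified kick and drift operators in cosmological time variables):

      def cola_step(dx, dv, a_grav, d2x_lpt_dt2, x_lpt_next, dt):
          """One schematic step for the residual displacement dx = x - x_LPT.

          The residual obeys d^2(dx)/dt^2 = a_grav - d^2(x_LPT)/dt^2, so the
          large-scale motion is carried analytically by LPT while the N-body
          force only integrates the small-scale remainder.
          """
          dv = dv + dt * (a_grav - d2x_lpt_dt2)   # kick acting on the residual
          dx = dx + dt * dv                       # drift of the residual
          return x_lpt_next + dx, dx, dv          # full position = LPT + residual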

  7. Isocratic and gradient impedance plot analysis and comparison of some recently introduced large size core-shell and fully porous particles.

    PubMed

    Vanderheyden, Yoachim; Cabooter, Deirdre; Desmet, Gert; Broeckhoven, Ken

    2013-10-18

    The intrinsic kinetic performance of three recently commercialized large size (≥4μm) core-shell particles packed in columns with different lengths has been measured and compared with that of standard fully porous particles of similar and smaller size (5 and 3.5μm, respectively). The kinetic performance is compared in both absolute (plot of t0 versus the plate count N or the peak capacity np for isocratic and gradient elution, respectively) and dimensionless units. The latter is realized by switching to so-called impedance plots, a format which has been previously introduced (as a plot of t0/N^2 or E0 versus N_opt/N) and has in the present study been extended from isocratic to gradient elution (where the impedance plot corresponds to a plot of t0/np^4 versus np,opt^2/np^2). Both the isocratic and gradient impedance plots yielded a very similar picture: the clustered impedance plot curves divide into two distinct groups, one for the core-shell particles (lowest values, i.e. best performance) and one for the fully porous particles (highest values), confirming the clear intrinsic kinetic advantage of core-shell particles. If used around their optimal flow rate, the core-shell particles displayed a minimal separation impedance that is about 40% lower than the fully porous particles. Even larger gains in separation speed can be achieved in the C-term regime. Copyright © 2013 Elsevier B.V. All rights reserved.
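
    The plotted quantities are simple ratios of measured column parameters. A small helper written directly from the axis definitions quoted in the abstract (illustrative only; it omits the pressure- and viscosity-based form of the separation impedance):

      def isocratic_impedance_point(t0, N, N_opt):
          """Return the isocratic impedance-plot coordinates (t0/N^2, N_opt/N)."""
          return t0 / N**2, N_opt / N

      def gradient_impedance_point(t0, n_p, n_p_opt):
          """Return the gradient-elution coordinates (t0/np^4, np_opt^2/np^2)."""
          return t0 / n_p**4, (n_p_opt / n_p)**2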

  8. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes will be presented and different concepts of shielding and protection discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.

  9. Scaling theory for the quasideterministic limit of continuous bifurcations.

    PubMed

    Kessler, David A; Shnerb, Nadav M

    2012-05-01

    Deterministic rate equations are widely used in the study of stochastic, interacting particle systems. This approach assumes that the inherent noise, associated with the discreteness of the elementary constituents, may be neglected when the number of particles N is large. Accordingly, it fails close to the extinction transition, when the amplitude of stochastic fluctuations is comparable with the size of the population. Here we present a general scaling theory of the transition regime for spatially extended systems. We demonstrate this through a detailed study of two fundamental models for out-of-equilibrium phase transitions: the Susceptible-Infected-Susceptible (SIS) model, which belongs to the directed percolation equivalence class, and the Susceptible-Infected-Recovered (SIR) model, belonging to the dynamic percolation class. Implementing the Ginzburg criterion, we show that the width of the fluctuation-dominated region scales like N^{-κ}, where N is the number of individuals per site and κ = 2/(d_{u}-d), with d_{u} the upper critical dimension. Other exponents that control the approach to the deterministic limit are shown to be calculable once κ is known. The theory is extended to include the corrections to the front velocity above the transition. It is supported by the results of extensive numerical simulations for systems of various dimensionalities.
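
    A quick worked example of the quoted exponent, using the upper critical dimensions of the two universality classes named in the abstract (d_u = 4 for directed percolation/SIS and d_u = 6 for dynamic percolation/SIR); the snippet is illustrative only:

      def kappa(d, d_u):
          """Width of the fluctuation-dominated region scales as N**(-kappa), kappa = 2/(d_u - d)."""
          return 2.0 / (d_u - d)

      # e.g. in d = 1:  SIS (d_u = 4) -> kappa = 2/3;  SIR (d_u = 6) -> kappa = 2/5
      print(kappa(1, 4), kappa(1, 6))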

  10. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    PubMed

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.

  11. A fast ellipse extended target PHD filter using box-particle implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Yongquan; Ji, Hongbing; Hu, Qi

    2018-01-01

    This paper presents a box-particle implementation of the ellipse extended target probability hypothesis density (ET-PHD) filter, called the ellipse extended target box particle PHD (EET-BP-PHD) filter, where the extended targets are described by a Poisson model developed by Gilholm et al. and the term "box" is here equivalent to the term "interval" used in interval analysis. The proposed EET-BP-PHD filter is capable of dynamically tracking multiple ellipse extended targets and estimating the target states and the number of targets, in the presence of clutter measurements, false alarms and missed detections. To derive the PHD recursion of the EET-BP-PHD filter, a suitable measurement likelihood is defined for a given partitioning cell, and the main implementation steps are presented along with the necessary box approximations and manipulations. The limitations and capabilities of the proposed EET-BP-PHD filter are illustrated by simulation examples. The simulation results show that a box-particle implementation of the ET-PHD filter can avoid the high number of particles and reduce the computational burden, compared to a standard particle implementation of the ET-PHD filter for extended target tracking.

  12. Extending the Coyote emulator to dark energy models with standard w_0-w_a parametrization of the equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casarini, L.; Bonometto, S.A.; Tessarotto, E.

    2016-08-01

    We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale-factor-dependent equation of state of the form w = w_0 + (1-a) w_a. The extension is based on the mapping rule between non-linear spectra of DE models with a constant equation of state and those with a time-varying one, originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with w_0-w_a parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, which can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
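
    The parametrization being emulated is the standard CPL form. A small helper showing the equation of state and the corresponding dark-energy density evolution (standard background-cosmology relations, not the pkequal mapping algorithm itself):

      import numpy as np

      def w_cpl(a, w0, wa):
          """CPL equation of state w(a) = w0 + (1 - a) * wa."""
          return w0 + (1.0 - a) * wa

      def rho_de_ratio(a, w0, wa):
          """rho_DE(a) / rho_DE(a=1) for the CPL parametrization."""
          return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))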

  13. On-Chip Transport of Biological Fluids in MEMS Devices

    DTIC Science & Technology

    1999-02-01

    this model has been extended for multi-dimensional geometries to simulate electroosmotic flow in microdevices. Electrophoresis model in CFD-ACE+ will...integrated with CFD-ACE+. 7.0 REFERENCES 1. N. A. Patankar and H. H. Hu, "Numerical Simulation of Electroosmotic Flow," Analytical Chemistry, 70...Electroosmosis has been developed and successfully integrated with CFD-ACE+ code. (ii) Extension of the above-mentioned model to simulate

  14. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    NASA Astrophysics Data System (ADS)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST. Program Files doi:http://dx.doi.org/10.17632/w7rgdrhb85.1 Licensing provisions: BSD 3-clause Programming language: C, C++ External routines/libraries: For compiling: SCons, MPI (optional) Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl. Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016 Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version. Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton. Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and coupling algorithmics are abstracted and incorporated in MaMiCo. Once an algorithm is set up in MaMiCo, it can be used and extended, even if other solvers are used (as soon as the respective interfaces are implemented/available). Reasons for the new version: We have incorporated a new algorithm to simulate transient molecular-continuum systems and to automatically sample data over multiple MD runs that can be executed simultaneously (on, e.g., a compute cluster). MaMiCo has further been extended by an interface to incorporate boundary forcing to account for open molecular dynamics boundaries. Besides support for coupling with various MD and CFD frameworks, the new version contains a test case that allows running molecular-continuum Couette flow simulations out of the box. No external tools or simulation codes are required anymore. However, the user is free to switch from the included MD simulation package to LAMMPS. For details on how to run the transient Couette problem, see the file README in the folder coupling/tests (Remark on MaMiCo V1.1). Summary of revisions: Open boundary forcing; Multi-instance MD sampling; support for transient molecular-continuum systems Restrictions: Currently, only single-centered systems are supported. For access to the LAMMPS-based implementation of DPD boundary forcing, please contact Xin Bian, xin.bian@tum.de. Additional comments: Please see file license_mamico.txt for further details regarding distribution and advertising of this software.

  15. Multiple Detector Optimization for Hidden Radiation Source Detection

    DTIC Science & Technology

    2015-03-26

    important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo N-Particle (MCNP) code

  16. Spatially Resolved Analysis of Amines Using a Fluorescence Molecular Probe: Molecular Analysis of IDPs

    NASA Technical Reports Server (NTRS)

    Clemett, S. J.; Messenger, S.; Thomas-Keprta, K. L.; Wentworth, S. J.; Robinson, G. A.; McKay, D. S.

    2002-01-01

    Some Interplanetary Dust Particles (IDPs) have large isotope anomalies in H and N. To address the nature of the carrier phase, we are developing a procedure to spatially resolve the distribution of organic species on IDP thin sections utilizing fluorescent molecular probes. Additional information is contained in the original extended abstract.

  17. CT-based MCNPX dose calculations for gynecology brachytherapy employing a Henschke applicator

    NASA Astrophysics Data System (ADS)

    Yu, Pei-Chieh; Nien, Hsin-Hua; Tung, Chuan-Jong; Lee, Hsing-Yi; Lee, Chung-Chi; Wu, Ching-Jung; Chao, Tsi-Chian

    2017-11-01

    The purpose of this study is to investigate the dose perturbation caused by the metal ovoid structures of a Henschke applicator using Monte Carlo simulation in a realistic phantom. The Henschke applicator has been widely used for gynecologic patients treated by brachytherapy in Taiwan. However, the commercial brachytherapy planning system (BPS) did not properly evaluate the dose perturbation caused by its metal ovoid structures. In this study, the Monte Carlo N-Particle Transport Code eXtended (MCNPX) was used to evaluate the brachytherapy dose distribution of a Henschke applicator embedded in a Plastic water phantom and in a heterogeneous patient computed tomography (CT) phantom. The dose comparison between the MC simulations and film measurements for a Plastic water phantom with the Henschke applicator showed good agreement. However, MC doses computed with the Henschke applicator deviated significantly (-80.6%±7.5%) from those computed without it. Furthermore, the doses in the heterogeneous patient CT phantom and the Plastic water phantom CT geometries with the Henschke applicator showed a discrepancy of 0 to -26.7% (-8.9%±13.8%). This study demonstrates that the metal ovoid structures of the Henschke applicator cannot be disregarded in brachytherapy dose calculation.

  18. Alternate Operating Scenarios for NDCX-II

    NASA Astrophysics Data System (ADS)

    Sharp, W. M.; Friedman, A.; Grote, D. P.; Cohen, R. H.; Lund, S. M.; Vay, J.-L.; Waldron, W. L.; Yeun, A.

    2011-10-01

    NDCX-II is an accelerator facility being built at LBNL to study ion-heated warm dense matter and aspects of ion-driven targets for inertial-fusion energy. The baseline design calls for using twelve induction cells to accelerate 40 nC of Li+ ions to 1.2 MeV. During commissioning, though, we plan to extend the source lifetime by extracting less total charge. For operational flexibility, the option of using a helium plasma source is also being investigated. Over time, we expect that NDCX-II will be upgraded to substantially higher energies, necessitating the use of heavier ions to keep a suitable deposition range in targets. Each of these options requires development of an alternate acceleration schedule and the associated transverse focusing. The schedules here are first worked out with a fast-running 1-D particle-in-cell code ASP, then 2-D and 3-D Warp simulations are used to verify the 1-D results and to design transverse focusing. Work performed under the auspices of U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344 and by LBNL under Contract DE-AC03-76SF00098.

  19. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    proton PARMA PHITS-based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and...radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the...same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6

  20. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    proton PARMA PHITS-based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and Heavy...transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input...dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA

  1. A Wideband Fast Multipole Method for the two-dimensional complex Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Cho, Min Hyung; Cai, Wei

    2010-12-01

    A Wideband Fast Multipole Method (FMM) for the 2D Helmholtz equation is presented. It can evaluate the interactions between N particles governed by the fundamental solution of 2D complex Helmholtz equation in a fast manner for a wide range of complex wave number k, which was not easy with the original FMM due to the instability of the diagonalized conversion operator. This paper includes the description of theoretical backgrounds, the FMM algorithm, software structures, and some test runs. Program summaryProgram title: 2D-WFMM Catalogue identifier: AEHI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4636 No. of bytes in distributed program, including test data, etc.: 82 582 Distribution format: tar.gz Programming language: C Computer: Any Operating system: Any operating system with gcc version 4.2 or newer Has the code been vectorized or parallelized?: Multi-core processors with shared memory RAM: Depending on the number of particles N and the wave number k Classification: 4.8, 4.12 External routines: OpenMP ( http://openmp.org/wp/) Nature of problem: Evaluate interaction between N particles governed by the fundamental solution of 2D Helmholtz equation with complex k. Solution method: Multilevel Fast Multipole Algorithm in a hierarchical quad-tree structure with cutoff level which combines low frequency method and high frequency method. Running time: Depending on the number of particles N, wave number k, and number of cores in CPU. CPU time increases as N log N.

  2. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and the wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  3. On the transition from the quantum to the classical regime for massive scalar particles: A spatiotemporal approach

    NASA Astrophysics Data System (ADS)

    Lusanna, Luca; Pauri, Massimo

    2014-08-01

    If the classical structure of space-time is assumed to define an a priori scenario for the formulation of quantum theory (QT), the coordinate representation of the solutions of the Schroedinger equation of a quantum system containing one (N) massive scalar particle has a preferred status. Let us consider all of the solutions admitting a multipolar expansion of the probability density function (and more generally of the Wigner function) around a space-time trajectory to be properly selected. For every normalized solution there is a privileged trajectory implying the vanishing of the dipole moment of the multipolar expansion: it is given by the expectation value of the position operator. Then, the special subset of solutions which satisfy Ehrenfest's Theorem (thereby named Ehrenfest monopole wave functions (EMWF)) has the important property that this privileged classical trajectory is determined by a closed Newtonian equation of motion in which the effective force is the Newtonian force plus non-Newtonian terms (of order ħ^2 or higher) depending on the higher multipoles of the probability distribution ρ. Note that the superposition of two EMWFs is not an EMWF, a result to be strongly hoped for, given the possible unwanted implications concerning classical spatial perception. These results can be extended to N-particle systems in such a way that, when N classical trajectories with all the dipole moments vanishing and satisfying Ehrenfest's theorem are associated with the normalized wave functions of the N-body system, we get a natural transition from the 3N-dimensional configuration space to space-time. Moreover, these results can be extended to relativistic quantum mechanics. Consequently, in suitable states of N quantum particles which are EMWF, we get the "emergence" of corresponding "classical particles" following Newton-like trajectories in space-time. Note that all this holds true in the standard framework of quantum mechanics, i.e. assuming, in particular, the validity of Born's rule and the individual-system interpretation of the wave function (no ensemble interpretation). These results are valid without any approximation (like ħ → 0, big quantum numbers, etc.). Moreover, we do not commit ourselves to any specific ontological interpretation of quantum theory (such as, e.g., the Bohmian one). We will argue that, in substantial agreement with Bohr's viewpoint, the macroscopic description of the preparation, certain intermediate steps, and the detection of the final outcome of experiments involving massive particles are dominated by these classical "effective" trajectories. This approach can be applied to the point of view of decoherence in the case of a diagonal reduced density matrix ρ_red (an improper mixture) depending on the position variables of a massive particle and of a pointer. When both the particle and the pointer wave functions appearing in ρ_red are EMWF, the expectation value of the particle and pointer position variables becomes a statistical average over a classical ensemble. In these cases an improper quantum mixture becomes a classical statistical one, thus providing a particular answer to an open problem of decoherence concerning the emergence of classicality.

  4. Smoothed Particle Hydrodynamic Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  5. A Physical Model of Cosmogenic Nuclide Production in Stony and Iron Meteoroids on the Basis of Simulation Experiments

    NASA Astrophysics Data System (ADS)

    Leya, I.; Lange, H.-J.; Michel, R.; Meltzow, B.; Herpers, U.; Busemann, H.; Wieler, R.; Dittrich-Hannen, B.; Suter, M.; Kubik, P. W.

    1995-09-01

    By extending and improving earlier model calculations [1-4] of cosmogenic nuclide production by GCR particles in extraterrestrial matter, we can now present a physical model without free parameters for a consistent description of GCR production rates in stony and iron meteoroids. The model takes explicitly into account p- and n-induced reactions. GCR 4He particles are considered only approximately. It is based on depth-size and bulk-chemistry-dependent spectra of primary and secondary protons and of secondary neutrons calculated by HET and MORSE codes within the HERMES code system [5] and on the cross sections of the underlying reactions. Comprehensive and reliable sets of proton cross sections from thresholds up to 2.6 GeV exist now for many cosmogenic nuclides (see [6] for a review). For n-induced reactions the situation is not so good. Only a few data at low energies and practically no data at higher energies exist. GCR production of cosmogenic nuclides in stony meteoroids is already dominated by neutron-induced reactions for most meteoroid radii. In iron meteoroids neutrons are even more important because of the high mass numbers of the bulk and of consequently higher multiplicities for production of secondary neutrons. In order to overcome this problem, the necessary excitation functions of neutron-induced reactions were determined from experimental thick-target production rates by least-squares unfolding procedures using the code STAYS'L [7]. The data were produced in laboratory experiments under completely controlled conditions [8-11]. The unfolding procedure starts from guess functions (from threshold up to 900 MeV) based on all available experimental neutron cross sections and on theoretical ones calculated by the AREL [12] code which is a relativistic version of the hybrid model of pre-equilibrium reactions [13]. With the new neutron cross sections it is possible to describe simultaneously all data from the simulation experiments with an accuracy of better than 10 % and to calculate consistent cosmogenic nuclide production rates in stony and iron meteoroids. The new model calculations are so far valid for 10Be, 26Al, 36Cl, 41Ca, 53Mn as well as He, Ne and Ar isotopes. The new theoretical production rates are compared with measured depth profiles in stony and iron meteorites and will be discussed with respect to primary GCR spectra and preatmospheric radii and exposure histories of stony and iron meteoroids. Acknowledgement: This work was partially supported by the Deutsche Forschungsgemeinschaft and the Swiss National Science Foundation. References: [1] Michel R. et al. (1991) Meteoritics, 26, 221-242. [2] Michel R. et al. (1995) Planet. Space Sci., in press. [3] Bhandari N. et al. (1993) GCA, 57, 2361-2375. [4] Herpers U. et al. (1995) Planet. Space Sci., in press. [5] Cloth P. et al. (1988) JUEL-2203. [6] Michel R. (1994) in Nuclear Data for Science and Technology (J. K. Dickens, ed.), 337-343, Am. Nucl. Soc., La Grange Park. [7] Perrey F. G. (1977) Code STAYS'L, NEA Data Bank, OECD Paris. [8] Michel R. et al. (1986) Nucl. Instr. Meth. Phys. Res., B16, 61-82. [9] Michel R. et al. (1989) Nucl. Instr. Meth. Phys. Res., B42, 76-100. [10] Michel R. et al. (1993) J. Radioanal. Nucl. Chem., 169, 13-25. [11] Michel R. et al. (1994) in Nuclear Data for Science and Technology (J. K. Dickens, ed.), 377-379, Am. Nucl. Soc., La Grange Park. [12] Blann M. (1994) Code AREL, personal communication to R. Michel. [13] Blann M. (1972) Phys. Rev. Lett., 27, 337-340.

  6. Microgels for long-term storage of vitamins for extended spaceflight

    NASA Astrophysics Data System (ADS)

    Schroeder, R.

    2018-02-01

    Biocompatible materials that can encapsulate large amounts of nutrients while protecting them from degrading environmental influences are highly desired for extended manned spaceflight. In this study, alkaline-degradable microgels based on poly(N-vinylcaprolactam) (PVCL) were prepared and analysed with regard to their ability to stabilise retinol, which acts as a model vitamin (vitamin A1). It was investigated whether the secondary crosslinking of the particles with a polyphenol can prevent the isomerisation of biologically active all-trans retinol to biologically inactive cis-trans retinol. Both loading with retinol and secondary crosslinking of the particles were performed at room temperature to prevent an early degradation of the vitamin. This study showed that PVCL microgels drastically improve the water solubility of hydrophobic retinol. Additionally, it is demonstrated that the highly crosslinked microgel particles in aqueous solution can be utilised to greatly retard the light- and temperature-induced isomerisation process of retinol by a factor of almost 100 compared to pure retinol stored in ethanol. The use of microgels offers various advantages over other drug delivery systems as they exhibit enhanced biocompatibility and superior aqueous solubility.

  7. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution permitting FSD estimation of any parameters from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
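
    The Poisson sampling error being quantified is straightforward to reproduce numerically. A hedged toy demonstration (the exponential size distribution, bin layout and moment order n below are arbitrary illustrative choices, not values from the paper):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 6                                     # e.g. radar reflectivity varies as D^6
      D = np.linspace(0.1, 5.0, 50)             # drop diameter bins (mm)
      dD = D[1] - D[0]
      N_D = 8000.0 * np.exp(-2.3 * D) * dD      # expected counts per bin in the sampled volume

      # Draw many synthetic samples: Poisson counts per size bin, then the integrated property ~ sum(counts * D^n)
      X = np.array([(rng.poisson(N_D) * D**n).sum() for _ in range(20000)])
      print(f"fractional standard deviation of the D^{n} moment: {X.std() / X.mean():.3f}")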

  8. Extended optical model for fission

    DOE PAGES

    Sin, M.; Capote, R.; Herman, M. W.; ...

    2016-03-07

    A comprehensive formalism to calculate fission cross sections based on the extension of the optical model for fission is presented. It can be used for description of nuclear reactions on actinides featuring multi-humped fission barriers with partial absorption in the wells and direct transmission through discrete and continuum fission channels. The formalism describes the gross fluctuations observed in the fission probability due to vibrational resonances, and can be easily implemented in existing statistical reaction model codes. The extended optical model for fission is applied for neutron induced fission cross-section calculations on 234,235,238U and 239Pu targets. A triple-humped fission barrier is used for 234,235U(n,f), while a double-humped fission barrier is used for 238U(n,f) and 239Pu(n,f) reactions as predicted by theoretical barrier calculations. The impact of partial damping of class-II/III states, and of direct transmission through discrete and continuum fission channels, is shown to be critical for a proper description of the measured fission cross sections for 234,235,238U(n,f) reactions. The 239Pu(n,f) reaction can be calculated in the complete damping approximation. Calculated cross sections for 235,238U(n,f) and 239Pu(n,f) reactions agree within 3% with the corresponding cross sections derived within the Neutron Standards least-squares fit of available experimental data. Lastly, the extended optical model for fission can be used for both theoretical fission studies and nuclear data evaluation.

  9. Ion channeling study of defects in compound crystals using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Turos, A.; Jozwik, P.; Nowicki, L.; Sathish, N.

    2014-08-01

    Ion channeling is a well-established technique for determination of structural properties of crystalline materials. Defect depth profiles have usually been determined based on the two-beam model developed by Bøgh (1968) [1]. As long as the main research interest was focused on single-element crystals it was considered sufficiently accurate. A new challenge emerged with the growing technological importance of compound single crystals and epitaxial heterostructures. Overlap of partial spectra due to different sublattices and the formation of complicated defect structures make the two-beam method hardly applicable. The solution is provided by Monte Carlo computer simulations. Our paper reviews principal aspects of this approach and the recent developments in the McChasy simulation code. The latter made it possible to distinguish between randomly displaced atoms (RDA) and extended defects (dislocations, loops, etc.). Hence, complex defect structures can be characterized by the relative content of these two components. The next refinement of the code consists of a detailed parameterization of dislocations and dislocation loops. Defect profiles for a variety of compound crystals (GaN, ZnO, SrTiO3) have been measured and evaluated using the McChasy code. Damage accumulation curves for RDA and extended defects revealed a non-monotonic defect buildup with some characteristic steps. The transition to each stage is governed by a different driving force. As shown by the complementary high-resolution XRD measurements, lattice strain plays the crucial role here and can be correlated with the concentration of extended defects.

  10. Hydrodynamic simulations with the Godunov smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Murante, G.; Borgani, S.; Brunino, R.; Cha, S.-H.

    2011-10-01

    We present results based on an implementation of the Godunov smoothed particle hydrodynamics (GSPH), originally developed by Inutsuka, in the GADGET-3 hydrodynamic code. We first review the derivation of the GSPH discretization of the equations of momentum and energy conservation, starting from the convolution of these equations with the interpolating kernel. The two most important aspects of the numerical implementation of these equations are (a) the appearance of fluid velocity and pressure obtained from the solution of the Riemann problem between each pair of particles, and (b) the absence of an artificial viscosity term. We carry out three different controlled hydrodynamical three-dimensional tests, namely the Sod shock tube, the development of Kelvin-Helmholtz instabilities in a shear-flow test and the 'blob' test describing the evolution of a cold cloud moving against a hot wind. The results of our tests confirm and extend in a number of aspects those recently obtained by Cha, Inutsuka & Nayakshin: (i) GSPH provides a much improved description of contact discontinuities, with respect to smoothed particle hydrodynamics (SPH), thus avoiding the appearance of spurious pressure forces; (ii) GSPH is able to follow the development of gas-dynamical instabilities, such as the Kelvin-Helmholtz and the Rayleigh-Taylor ones; (iii) as a result, GSPH describes the development of curl structures in the shear-flow test and the dissolution of the cold cloud in the 'blob' test. Besides comparing the results of GSPH with those from standard SPH implementations, we also discuss in detail the effect on the performance of GSPH of changing different aspects of its implementation: choice of the number of neighbours, accuracy of the interpolation procedure to locate the interface between two fluid elements (particles) for the solution of the Riemann problem, order of the reconstruction for the assignment of variables at the interface, choice of the limiter to prevent oscillations of interpolated quantities in the solution of the Riemann problem. The results of our tests demonstrate that GSPH is in fact a highly promising hydrodynamic scheme, also to be coupled to an N-body solver, for astrophysical and cosmological applications.

  11. OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George

    We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code, which is capable of performing an unprecedented in silico experiment - simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton modeled by 4 million mesoscopic particles - on a single shared memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.

  12. TOPAS Tool for Particle Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perl, Joseph

    2013-05-30

    TOPAS lets users simulate the passage of subatomic particles moving through any kind of radiation therapy treatment system, can import a patient geometry, can record dose and other quantities, has advanced graphics, and is fully four-dimensional (3D plus time) to handle the most challenging time-dependent aspects of modern cancer treatments. TOPAS unlocks the power of the most accurate particle transport simulation technique, the Monte Carlo (MC) method, while removing the painstaking coding work such methods used to require. Research physicists can use TOPAS to improve delivery systems towards safer and more effective radiation therapy treatments, easily setting up and running complex simulations that previously used to take months of preparation. Clinical physicists can use TOPAS to increase accuracy while reducing side effects, simulating patient-specific treatment plans at the touch of a button. TOPAS is designed as a “user code” layered on top of the Geant4 Simulation Toolkit. TOPAS includes the standard Geant4 toolkit, plus additional code to make Geant4 easier to control and to extend Geant4 functionality. TOPAS aims to make proton simulation both “reliable” and “repeatable.” “Reliable” means both accurate physics and a high likelihood to simulate precisely what the user intended to simulate, reducing issues of wrong units, wrong materials, wrong scoring locations, etc. “Repeatable” means not just getting the same result from one simulation to another, but being able to easily restore a previously used setup and reducing sources of error when a setup is passed from one user to another. The TOPAS control system incorporates key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes in control files. TOPAS has been used to model proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and has demonstrated dose calculation based on patient-specific CT data.

  13. Modeling Particle Acceleration and Transport at a 2-D CME-Driven Shock

    NASA Astrophysics Data System (ADS)

    Hu, Junxiang; Li, Gang; Ao, Xianzhi; Zank, Gary P.; Verkhoglyadova, Olga

    2017-11-01

    We extend our earlier Particle Acceleration and Transport in the Heliosphere (PATH) model to study particle acceleration and transport at a coronal mass ejection (CME)-driven shock. We model the propagation of a CME-driven shock in the ecliptic plane using the ZEUS-3D code from 20 solar radii to 2 AU. As in the previous PATH model, the initiation of the CME-driven shock is simplified and modeled as a disturbance at the inner boundary. Unlike the earlier PATH model, the disturbance is now longitudinally dependent. Particles are accelerated at the 2-D shock via the diffusive shock acceleration mechanism. The acceleration depends on both the parallel and perpendicular diffusion coefficients κ|| and κ⊥ and is therefore shock-obliquity dependent. Following the procedure used in Li, Shalchi, et al., we obtain the particle injection energy, the maximum energy, and the accelerated particle spectra at the shock front. Once accelerated, particles diffuse and convect in the shock complex. The diffusion and convection of these particles are treated using a refined 2-D shell model in an approach similar to Zank et al. When particles escape from the shock, they propagate along and across the interplanetary magnetic field. The propagation is modeled using a focused transport equation with the addition of perpendicular diffusion. We solve the transport equation using a backward stochastic differential equation method where adiabatic cooling, focusing, pitch angle scattering, and cross-field diffusion effects are all included. Time intensity profiles and instantaneous particle spectra as well as particle pitch angle distributions are shown for two example CME shocks.

  14. Simulation of the time structure of Extensive Air Showers with CORSIKA initiated by various primary particles at Alborz-I observatory level

    NASA Astrophysics Data System (ADS)

    Bahmanabadi, Mahmud; Moghaddam, Saba Mortazavi

    2018-05-01

    A detailed simulation of showers with various zenith angles in the atmosphere produced by different primary particles including gamma, proton, carbon, and iron at the Alborz-I observatory level (35°43'N, 51°20'E, 1200 m a.s.l. = 890 g cm^-2), in the energy range 3 × 10^13 eV - 3 × 10^15 eV, has been performed by means of the CORSIKA Monte Carlo code. The aim of this study is to examine the time structure of secondary particles in Extensive Air Showers (EAS) produced by the different primary particles. For each primary particle, the distribution of the mean values of the time delays of secondary particles relative to the first particle hitting the ground level in each EAS, ⟨τ_i⟩, and the distribution of their mean standard deviations, ⟨σ_i⟩, in terms of distance from the shower core are obtained. The mean thickness and profile of the showers as a function of their energy, primary mass, and zenith angle is described.
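
    The quoted statistics are per-shower means and spreads of arrival-time delays, binned in core distance. A small illustrative helper (the array names are assumptions for illustration, not CORSIKA output fields):

      import numpy as np

      def time_structure(t_arrival, r_core, r_bins):
          """Mean delay <tau> and its spread sigma versus distance from the shower core.

          t_arrival : arrival times of the secondary particles of one shower
          r_core    : distance of each particle from the shower core
          r_bins    : edges of the core-distance bins
          """
          tau = t_arrival - t_arrival.min()        # delay w.r.t. the first particle at ground level
          mean_tau, sigma_tau = [], []
          for lo, hi in zip(r_bins[:-1], r_bins[1:]):
              sel = (r_core >= lo) & (r_core < hi)
              mean_tau.append(tau[sel].mean() if sel.any() else np.nan)
              sigma_tau.append(tau[sel].std() if sel.any() else np.nan)
          return np.array(mean_tau), np.array(sigma_tau)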

  15. Particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Vranic, M.; Grismayer, T.; Martins, J. L.; Fonseca, R. A.; Silva, L. O.

    2015-06-01

    Particle-in-cell merging algorithms aim to resample dynamically the six-dimensional phase space occupied by particles without distorting substantially the physical description of the system. Whereas various approaches have been proposed in previous works, none of them seemed to be able to conserve fully charge, momentum, energy and their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy are ensured by the resolution of a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing in addition to the conservation of local quantities, the good reproducibility of the particle distributions. In cases where the number of particles would otherwise increase exponentially in the simulation box, dynamical merging permits a considerable speedup and significant memory savings, without which the simulations would be impossible to perform.
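
    For the special case of massless macro-particles (e.g. photons), a two-particle replacement that conserves total weight, energy and momentum can be written in a few lines. The sketch below is a hedged illustration of that idea only; it is not the algorithm of the paper, which also handles massive particles, the selection of merging cells in phase space, and the deposition of conserved quantities on the grid:

      import numpy as np

      def merge_photons(weights, momenta):
          """Merge N massless macro-particles into two, conserving total weight,
          energy and momentum (units where photon energy = |p|; assumes the
          total momentum is non-zero)."""
          w_t = weights.sum()
          p_t = (weights[:, None] * momenta).sum(axis=0)           # total momentum
          e_t = (weights * np.linalg.norm(momenta, axis=1)).sum()  # total energy

          eps = e_t / w_t                      # energy of each of the two new photons
          u = p_t / np.linalg.norm(p_t)        # direction of the total momentum
          cos_t = np.linalg.norm(p_t) / e_t    # <= 1 by the triangle inequality
          sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))

          # any unit vector perpendicular to u fixes the plane of the two new momenta
          trial = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
          v = np.cross(u, trial)
          v /= np.linalg.norm(v)

          p1 = eps * (cos_t * u + sin_t * v)   # the +/- tilt makes the transverse parts cancel
          p2 = eps * (cos_t * u - sin_t * v)
          return w_t / 2.0, p1, p2             # each new macro-particle carries half the weight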

  16. CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Owen, John Michael; Raskin, Cody; Frontiere, Nicholas

    2018-01-01

    The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; the lack of obvious symmetries or a simplified spatial geometry to exploit necessitates 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ku, S.; Chang, C. S.; Hager, R.

    Here, a fast edge turbulence suppression event has been simulated in the electrostatic version of the gyrokinetic particle-in-cell code XGC1 in a realistic diverted tokamak edge geometry under neutral particle recycling. The results show that the sequence of turbulent Reynolds-stress-driven flow followed by neoclassical ion orbit-loss driven flow conspires to form the sustaining radial electric field shear and to quench turbulent transport just inside the last closed magnetic flux surface. As a result, the main suppression action is located in a thin radial layer around ψ_N ≃ 0.96–0.98, where ψ_N is the normalized poloidal flux, with a time scale of ~0.1 ms.

  18. CEM2k and LAQGSM Codes as Event-Generators for Space Radiation Shield and Cosmic Rays Propagation Applications

    NASA Technical Reports Server (NTRS)

    Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.

    2002-01-01

    Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for the description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.

  19. Computing Fourier integral operators with caustics

    NASA Astrophysics Data System (ADS)

    Caday, Peter

    2016-12-01

    Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards and forward and inverse discrete Fourier transforms, which can be computed in O(N^(n+(n-1)/2) log N) time for an n-dimensional FIO. The same preprocessed data also allows computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm’s output, and easily extendible MATLAB/C++ source code is available from the author.

  20. Comparative cytological study of four species in the genera Holomastigotes and Uteronympha n. comb. (Holomastigotidae, Parabasalia), symbiotic flagellates of termites.

    PubMed

    Brugerolle, Guy

    2006-01-01

    Cytological features observed using light, immunofluorescence, and electron microscopy of the type species Holomastigotes elongatum were compared with those of Holomastigotes lanceolata and Holomastigotes flexuosum n. sp. The comparison was extended to Spirotrichonymphella pudibunda and to Uteronympha africana n. gen. n. sp., in order to present the common features of the Holomastigotidae (Spirotrichonymphida). All these species have anterior basal bodies bearing microfibrillar or striated rootlets that are reduced or absent posterior to the nucleus. An axostylar trunk is present in Holomastigotes elongatum and Holomastigotes lanceolata, whereas the axostylar microtubules do not extend posterior to the nucleus in Holomastigotes flexuosum, Spirotrichonymphella, and Uteronympha. Uteronympha africana has specific features, such as a transverse plaque inside the columella from which arise microtubules capping the nucleus, and, as in Spirotrichonympha, the striated lamina is present all along the flagellar lines. Uteronympha africana has the ability to endocytose wood particles in addition to the osmotrophic feeding that occurs in all the Holomastigotidae.

  1. Addition and Removal Energies via the In-Medium Similarity Renormalization Group Method

    NASA Astrophysics Data System (ADS)

    Yuan, Fei

    The in-medium similarity renormalization group (IM-SRG) is an ab initio many-body method suitable for systems with moderate numbers of particles due to its polynomial scaling in computational cost. The formalism is highly flexible and admits a variety of modifications that extend its utility beyond the original goal of computing ground state energies of closed-shell systems. In this work, we present an extension of IM-SRG through quasidegenerate perturbation theory (QDPT) to compute addition and removal energies (single particle energies) near the Fermi level at low computational cost. This expands the range of systems that can be studied from closed-shell ones to nearby systems that differ by one particle. The method is applied to circular quantum dot systems and nuclei, and compared against other methods including equations-of-motion (EOM) IM-SRG and EOM coupled-cluster (CC) theory. The results are in good agreement for most cases. As part of this work, we present an open-source implementation of our flexible and easy-to-use J-scheme framework as well as the HF, IM-SRG, and QDPT codes built upon this framework. We include an overview of the overall structure, the implementation details, and strategies for maintaining high code quality and efficiency. Lastly, we also present a graphical application for manipulation of angular momentum coupling coefficients through a diagrammatic notation for angular momenta (Jucys diagrams). The tool enables rapid derivations of equations involving angular momentum coupling--such as in J-scheme--and significantly reduces the risk of human errors.

  2. Simulation of ultra-high energy photon propagation in the geomagnetic field

    NASA Astrophysics Data System (ADS)

    Homola, P.; Góra, D.; Heck, D.; Klages, H.; Pękala, J.; Risse, M.; Wilczyńska, B.; Wilczyński, H.

    2005-12-01

    The identification of primary photons or specifying stringent limits on the photon flux is of major importance for understanding the origin of ultra-high energy (UHE) cosmic rays. UHE photons can initiate particle cascades in the geomagnetic field, which leads to significant changes in the subsequent atmospheric shower development. We present a Monte Carlo program allowing detailed studies of conversion and cascading of UHE photons in the geomagnetic field. The program, named PRESHOWER, can be used both as an independent tool and together with a shower simulation code. With the stand-alone version of the code it is possible to investigate various properties of the particle cascade induced by UHE photons interacting in the Earth's magnetic field before entering the Earth's atmosphere. Combining this program with an extensive air shower simulation code such as CORSIKA offers the possibility of investigating signatures of photon-initiated showers. In particular, features can be studied that help to discern such showers from the ones induced by hadrons. As an illustration, calculations for the conditions of the southern part of the Pierre Auger Observatory are presented. Catalogue identifier: ADWG. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG. Program obtainable: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer on which the program has been thoroughly tested: Intel-Pentium based PC. Operating system: Linux, DEC-Unix. Programming language used: C, FORTRAN 77. Memory required to execute with typical data: <100 kB. No. of bits in a word: 32. Has the code been vectorized?: no. Number of lines in distributed program, including test data, etc.: 2567. Number of bytes in distributed program, including test data, etc.: 25 690. Distribution format: tar.gz. Other procedures used in PRESHOWER: IGRF [N.A. Tsyganenko, National Space Science Data Center, NASA GSFC, Greenbelt, MD 20771, USA, http://nssdc.gsfc.nasa.gov/space/model/magnetos/data-based/geopack.html], bessik, ran2 [Numerical Recipes, http://www.nr.com]. Nature of the physical problem: Simulation of a cascade of particles initiated by a UHE photon passing through the geomagnetic field above the Earth's atmosphere. Method of solution: The primary photon is tracked until its conversion into an e+e- pair or until it reaches the upper atmosphere. If conversion occurred, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). The procedure ends at the top of the atmosphere and the shower particle data are saved. Restrictions on the complexity of the problem: Gamma conversion into particles other than an electron pair has not been taken into account. Typical running time: 100 preshower events with primary energy 10 eV require about 50 min of CPU time on an 800 MHz machine; with 10 eV the simulation time for 100 events grows to about 500 min.

  3. Numerical investigation of non-perturbative kinetic effects of energetic particles on toroidicity-induced Alfvén eigenmodes in tokamaks and stellarators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaby, Christoph; Könies, Axel; Kleiber, Ralf

    2016-09-15

    The resonant interaction of shear Alfvén waves with energetic particles is investigated numerically in tokamak and stellarator geometry using a non-perturbative MHD-kinetic hybrid approach. The focus lies on toroidicity-induced Alfvén eigenmodes (TAEs), which are most easily destabilized by a fast-particle population in fusion plasmas. While the background plasma is treated within the framework of an ideal-MHD theory, the drive of the fast particles, as well as Landau damping of the background plasma, is modelled using the drift-kinetic Vlasov equation without collisions. Building on analytical theory, a fast numerical tool, STAE-K, has been developed to solve the resulting eigenvalue problem using a Riccati shooting method. The code, which can be used for parameter scans, is applied to tokamaks and the stellarator Wendelstein 7-X. High energetic-ion pressure leads to large growth rates of the TAEs and to their conversion into kinetically modified TAEs and kinetic Alfvén waves via continuum interaction. To better understand the physics of this conversion mechanism, the connections between TAEs and the shear Alfvén wave continuum are examined. It is shown that, when energetic particles are present, the continuum deforms substantially and the TAE frequency can leave the continuum gap. The interaction of the TAE with the continuum leads to singularities in the eigenfunctions. To further advance the physical model and also to eliminate the MHD continuum together with the singularities in the eigenfunctions, a fourth-order term connected to radiative damping has been included. The radiative damping term is connected to non-ideal effects of the bulk plasma and introduces higher-order derivatives to the model. Thus, it has the potential to substantially change the nature of the solution. For the first time, the fast-particle drive, Landau damping, continuum damping, and radiative damping have been modelled together in tokamak- as well as in stellarator geometry.

  4. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

    The Fourier-Bessel Particle-In-Cell code is scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is the use of a spectral decomposition in cylindrical geometry. This decomposition makes it possible to combine the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) with those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup when compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (the GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and has a very similar input format to that of Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.
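
    A toy illustration of the underlying idea, not FBPIC's actual API: a field sampled on an (r, θ) grid is decomposed into azimuthal Fourier modes, and keeping only the lowest few modes is what buys the quasi-3D speedup for nearly axisymmetric problems.

      import numpy as np

      # Toy polar grid and a field with only m = 0 and m = ±1 azimuthal content.
      Nr, Ntheta = 64, 32
      r = np.linspace(0.0, 1.0, Nr)
      theta = np.linspace(0.0, 2.0*np.pi, Ntheta, endpoint=False)
      R, T = np.meshgrid(r, theta, indexing="ij")
      F = np.exp(-R**2 / 0.1) * (1.0 + 0.3*np.cos(T))

      # Azimuthal decomposition: one complex radial profile F_m(r) per mode m.
      F_m = np.fft.fft(F, axis=1) / Ntheta

      # Keep only the lowest modes (m = 0, +1, -1), as a quasi-3D PIC code would.
      keep = [0, 1, Ntheta - 1]
      F_trunc = np.zeros_like(F_m)
      F_trunc[:, keep] = F_m[:, keep]

      F_back = np.real(np.fft.ifft(F_trunc * Ntheta, axis=1))
      print("max truncation error:", np.abs(F_back - F).max())   # ~1e-16 for this field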

  5. Bio-Fluid Transport Models Through Nano and Micro-Fluidic Components

    DTIC Science & Technology

    2005-08-01

    ...nm of the wall in steady electroosmotic flow with good accuracy. The nPIV data were in excellent agreement with the model predictions for monovalent... first experimental probe inside the electric double layer in electroosmotic flow of an aqueous electrolyte solution. Subject terms: micro- and nanofluidics, electroosmotic flow, nano-particle image velocimetry. (Report, 225 pages.)

  6. DYNECHARM++: a toolkit to simulate coherent interactions of high-energy charged particles in complex structures

    NASA Astrophysics Data System (ADS)

    Bagli, Enrico; Guidi, Vincenzo

    2013-08-01

    A toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures, called DYNECHARM++, has been developed. The code has been written in C++, taking advantage of object-oriented programming methods. The code is capable of evaluating the electrical characteristics of complex atomic structures and of simulating and tracking the particle trajectory within them. A calculation method for the electrical characteristics based on their expansion in Fourier series has been adopted. Two different approaches to simulating the interaction have been adopted, relying on the full integration of particle trajectories under the continuum potential approximation and on the definition of cross-sections of coherent processes. Finally, the code has been shown to reproduce experimental results and to simulate the interaction of charged particles with complex structures.

  7. Cosmological neutrino simulations at extreme scale

    DOE PAGES

    Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...

    2017-08-01

    Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world’s largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
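
    A rough sketch of the kind of fixed-point compression described above, illustrative only (the actual TianNu layout differs in detail, keeps the cell index implicit in the particle ordering, and compresses velocities as well): each coordinate is stored as a coarse cell index plus a short integer offset within the cell instead of a full-precision float.

      import numpy as np

      ncell = 64                      # coarse mesh cells per dimension (box normalized to 1)
      cell_size = 1.0 / ncell

      def compress_positions(pos):
          """Store each coordinate as (cell index, 16-bit fixed-point offset within the cell)."""
          cell = np.floor(pos / cell_size).astype(np.uint16)
          frac = pos / cell_size - cell                       # offset in [0, 1)
          offset = np.round(frac * 65535.0).astype(np.uint16)
          return cell, offset

      def decompress_positions(cell, offset):
          return (cell.astype(np.float64) + offset / 65535.0) * cell_size

      pos = np.random.rand(1000, 3)
      cell, off = compress_positions(pos)
      err = np.abs(decompress_positions(cell, off) - pos).max()
      print(err, "<<", cell_size)     # quantization error ~ cell_size / 65535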

  8. Development and validation of a critical gradient energetic particle driven Alfven eigenmode transport model for DIII-D tilted neutral beam experiments

    DOE PAGES

    Waltz, Ronald E.; Bass, Eric M.; Heidbrink, William W.; ...

    2015-10-30

    Recent experiments with the DIII-D tilted neutral beam injection (NBI) varying the beam energetic particle (EP) source profiles have provided strong evidence that unstable Alfven eigenmodes (AE) drive stiff EP transport at a critical EP density gradient. Here the critical gradient is identified by the local AE growth rate being equal to the local ITG/TEM growth rate at the same low toroidal mode number. The growth rates are taken from the gyrokinetic code GYRO. Simulations show that the slowing down beam-like EP distribution has a slightly lower critical gradient than the Maxwellian. The ALPHA EP density transport code, used to validate the model, combines the low-n stiff EP critical density gradient AE mid-core transport with the energy independent high-n ITG/TEM density transport model controlling the central core EP density profile. For the on-axis NBI heated DIII-D shot 146102, while the net loss to the edge is small, about half the birth fast ions are transported from the central core r/a < 0.5 and the central density is about half the slowing down density. Lastly, these results are in good agreement with experimental fast ion pressure profiles inferred from MSE constrained EFIT equilibria.
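
    The stiff critical-gradient idea can be caricatured in a few lines; this is a generic sketch with made-up names, not the ALPHA transport model itself: below the critical EP density gradient only background transport acts, while any excess gradient is relaxed by a large stiff diffusivity.

      import numpy as np

      def ep_particle_flux(grad_n, grad_n_crit, d_background, d_stiff):
          """Stiff critical-gradient flux: background diffusion everywhere, plus a large
          'stiff' contribution only where the EP density gradient exceeds the critical value."""
          excess = np.maximum(0.0, np.abs(grad_n) - np.abs(grad_n_crit))
          return d_background * np.abs(grad_n) + d_stiff * excess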

  9. Code subspaces for LLM geometries

    NASA Astrophysics Data System (ADS)

    Berenstein, David; Miller, Alexandra

    2018-03-01

    We consider effective field theory around classical background geometries with a gauge theory dual, specifically those in the class of LLM geometries. These are dual to half-BPS states of N= 4 SYM. We find that the language of code subspaces is natural for discussing the set of nearby states, which are built by acting with effective fields on these backgrounds. This work extends our previous work by going beyond the strict infinite N limit. We further discuss how one can extract the topology of the state beyond N→∞ and find that, as before, uncertainty and entanglement entropy calculations provide a useful tool to do so. Finally, we discuss obstructions to writing down a globally defined metric operator. We find that the answer depends on the choice of reference state that one starts with. Therefore, within this setup, there is ambiguity in trying to write an operator that describes the metric globally.

  10. Sigma 1 protein of mammalian reoviruses extends from the surfaces of viral particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furlong, D.B.; Nibert, M.L.; Fields, B.N.

    1988-01-01

    Electron microscopy revealed structures consisting of long fibers topped with knobs extending from the surfaces of virions of mammalian reoviruses. The morphology of these structures was reminiscent of the fiber protein of adenovirus. Fibers were also seen extending from the reovirus top component and intermediate subviral particles but not from cores, suggesting that the fibers consist of either the μ1C or sigma1 outer capsid protein. Amino acid sequence analysis predicts that the reovirus cell attachment protein sigma1 contains an extended fiber domain. When sigma1 protein was released from viral particles with mild heat and subsequently obtained in isolation, it was found to have a morphology identical to that of the fiber structures seen extending from the viral particles. The identification of an extended form of sigma1 has important implications for its function in cell attachment. Other evidence suggests that sigma1 protein may occur in virions in both an extended and an unextended state.

  11. MHD modeling of a DIII-D low-torque QH-mode discharge and comparison to observations

    NASA Astrophysics Data System (ADS)

    King, J. R.; Kruger, S. E.; Burrell, K. H.; Chen, X.; Garofalo, A. M.; Groebner, R. J.; Olofsson, K. E. J.; Pankin, A. Y.; Snyder, P. B.

    2017-05-01

    Extended-MHD modeling of DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] quiescent H-mode (QH-mode) discharges with nonlinear NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)] simulations saturates into a turbulent state but does not saturate when the steady-state flow inferred from measurements is not included. This is consistent with the experimental observations of the quiescent regime on DIII-D. The simulation with flow develops into a saturated turbulent state where the nϕ=1 and 2 toroidal modes become dominant through an inverse cascade. Each mode in the range of nϕ=1 -5 is dominant at a different time. Consistent with experimental observations during QH-mode, the simulated state leads to large particle transport relative to the thermal transport. Analysis shows that the amplitude and phase of the density and temperature perturbations differ resulting in greater fluctuation-induced convective particle transport relative to the convective thermal transport. Comparison to magnetic-coil measurements shows that rotation frequencies differ between the simulation and experiment, which indicates that more sophisticated extended-MHD two-fluid modeling is required.

  12. Solving large scale structure in ten easy steps with COLA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
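
    A schematic of the COLA splitting, with placeholder callables and without the scale-factor-dependent operators of the actual comoving-frame formulation: the trajectory is written as an analytic LPT part plus a residual, and the integrator only has to step the residual, which is why a handful of timesteps suffices at large scales.

      import numpy as np

      def cola_step(x_res, v_res, t, dt, x_lpt, a_lpt, force):
          """One kick-drift-kick step for the residual displacement in a COLA-like frame.

          x_res, v_res : residual position / velocity relative to the LPT trajectory
          x_lpt(t), a_lpt(t) : analytic LPT trajectory and its acceleration (placeholders)
          force(x) : full N-body acceleration at physical positions x (placeholder)
          The residual only feels the difference between the true force and the LPT one.
          """
          x_phys = x_lpt(t) + x_res
          v_res = v_res + 0.5 * dt * (force(x_phys) - a_lpt(t))          # opening kick
          x_res = x_res + dt * v_res                                     # drift the residual
          x_phys = x_lpt(t + dt) + x_res
          v_res = v_res + 0.5 * dt * (force(x_phys) - a_lpt(t + dt))     # closing kick
          return x_res, v_res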

  13. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

    We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating point number operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions with the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with Sandy Bridge micro-architecture. This performance will be comparable to that of Graphic Processing Unit (GPU) cluster systems, such as the one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.

  14. In situ Orbit Extraction from Live, High Precision Collisionless Simulations of Systems Formed by Cold Collapse

    NASA Astrophysics Data System (ADS)

    Noriega-Mendoza, H.; Aguilar, L. A.

    2018-04-01

    We performed high precision, N-body simulations of the cold collapse of initially spherical, collisionless systems using the GYRFALCON code of Dehnen (2000). The collapses produce very prolate spheroidal configurations. After the collapse, the systems are simulated for 85 and 170 half-mass radius dynamical timescales, during which energy conservation is better than 0.005%. We use this period to extract individual particle orbits directly from the simulations. We then use the TAXON code of Carpintero and Aguilar (1998) to classify 1 to 1.5% of the extracted orbits from our final, relaxed configurations: less than 15% are chaotic orbits, 30% are box orbits and 60% are tube orbits (long and short axis). Our goal has been to prove that direct orbit extraction is feasible, and that there is no need to "freeze" the final N-body system configuration to extract a time-independent potential.

  15. Two species drag/diffusion model for energetic particle driven modes

    NASA Astrophysics Data System (ADS)

    Aslanyan, V.; Sharapov, S. E.; Spong, D. A.; Porkolab, M.

    2017-12-01

    A nonlinear bump-on-tail model for the growth and saturation of energetic particle driven plasma waves has been extended to include two populations of fast particles—one dominated by dynamical friction at the resonance and the other by velocity space diffusion. The resulting temporal evolution of the wave amplitude and frequency depends on the relative weight of the two populations. The two species model is applied to burning plasma with drag-dominated alpha particles and diffusion-dominated ICRH accelerated minority ions, showing the stabilization of bursting modes. The model also suggests an explanation for the recent observations on the TJ-II stellarator, where Alfvén Eigenmodes transition between steady state and bursting as the magnetic configuration varied.

  16. User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E)

    DTIC Science & Technology

    2014-06-01

    User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E), by James P. Larentzos et al., U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005-5069. Report ARL-SR-290, June 2014; dates covered: September 2013–February 2014.

  17. MeV-scale sterile neutrino decays at the Fermilab Short-Baseline Neutrino program

    NASA Astrophysics Data System (ADS)

    Ballett, Peter; Pascoli, Silvia; Ross-Lonergan, Mark

    2017-04-01

    Nearly-sterile neutrinos with masses in the MeV range and below would be produced in the beam of the Short-Baseline Neutrino (SBN) program at Fermilab. In this article, we study the potential for SBN to discover these particles through their subsequent decays in its detectors. We discuss the decays which will be visible at SBN in a minimal and non-minimal extension of the Standard Model, and perform simulations to compute the parameter space constraints which could be placed in the absence of a signal. We demonstrate that the SBN programme can extend existing bounds on well constrained channels such as N → ν l⁺l⁻ and N → l±π∓ while, thanks to the strong particle identification capabilities of liquid-Argon technology, also place bounds on often neglected channels such as N → νγ and N → νπ⁰. Furthermore, we consider the phenomenological impact of improved event timing information at the three detectors. As well as considering its role in background reduction, we note that if the light-detection systems in SBND and ICARUS can achieve nanosecond timing resolution, the effect of finite sterile neutrino mass could be directly observable, providing a smoking-gun signature for this class of models. We stress throughout that the search for heavy nearly-sterile neutrinos is a complementary new physics analysis to the search for eV-scale oscillations, and would extend the BSM programme of SBN while requiring no beam or detector modifications.

  18. A Novel Approach to Visualizing Dark Matter Simulations.

    PubMed

    Kaehler, R; Hahn, O; Abel, T

    2012-12-01

    In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
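
    The density estimate at the heart of this approach is easy to state: a set of particles that were neighbours on the initial Lagrangian grid spans a tetrahedron, and the mass assigned to that tetrahedron divided by its current volume gives the local density. A minimal sketch, with illustrative names:

      import numpy as np

      def tetra_volume(p0, p1, p2, p3):
          """Signed volume of the tetrahedron spanned by four particle positions (3-vectors)."""
          return np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0])) / 6.0

      def tetra_density(p0, p1, p2, p3, tetra_mass):
          """Density of one phase-space tetrahedron: its assigned mass over its current volume.

          Near caustics the volume passes through zero and the estimate diverges, which is
          precisely the sheet/filament/halo structure the rendering aims to preserve."""
          vol = abs(tetra_volume(p0, p1, p2, p3))
          return tetra_mass / vol if vol > 0.0 else np.inf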

  19. Benchmark Analysis of Pion Contribution from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, Sukesh K.; Blattnig, Steve R.; Norbury, John W.; Singleterry, Robert C., Jr.

    2008-01-01

    Shielding strategies for extended stays in space must include a comprehensive resolution of the secondary radiation environment inside the spacecraft induced by the primary, external radiation. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. A systematic verification and validation effort is underway for HZETRN, which is a space radiation transport code currently used by NASA. It performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. The question naturally arises as to what is the contribution of these particles to space radiation. The pion has a production kinetic energy threshold of about 280 MeV. The Galactic cosmic ray (GCR) spectra, coincidentally, reaches flux maxima in the hundreds of MeV range, corresponding to the pion production threshold. We present results from the Monte Carlo code MCNPX, showing the effect of lepton and meson physics when produced and transported explicitly in a GCR environment.

  20. The Dynamics of Dense Planetary Rings.

    NASA Astrophysics Data System (ADS)

    Mosqueira, Ignacio

    1995-01-01

    We study the dynamics of a two-mode narrow ring in the case that one of the modes dominates the overall ring perturbation. We use a simple two-streamline self-gravity model, including viscosity, and shepherd satellites. As might be expected, we find that the m = 1 mode appears to be a natural end state for the rings, inasmuch as the presence of a dominant eccentric mode inhibits the growth of other modes, but the reverse is not true. Why some rings exhibit other m values remains unexplained. Using a modified N-body code to include periodic boundary conditions in a perturbed shear flow, we investigate the role of viscosity on the dynamics of perturbed rings with optical depth tau ~ 1. In particular, we are concerned with rings such that q_e = a(de/da) ≠ 0, where a is the semi-major axis and e is the eccentricity. We confirm the possibility that, for a sufficiently perturbed ring, the angular momentum luminosity may reverse direction with respect to the unperturbed ring (Borderies et al. 1983a). We use observationally constrained parameters for the delta and epsilon Uranian rings, as well as the outer portion of Saturn's B ring. We find that understanding the effects of viscosity for the Uranian rings requires that both local and non-local transport terms be considered if the coefficient of restitution experimentally obtained by Bridges et al. (1984) is appropriate for ring particles. We also find evidence that the criterion for viscous overstability is satisfied in the case of high optical depth rings, as originally proposed by Borderies et al. (1985), making viscous overstability a leading candidate mechanism to explain the non-axisymmetric structure present in the outer portion of Saturn's B ring. To better understand our path-code results we extend a non-local and incompressible fluid model used by Borderies et al. (1985) for dense rings. We incorporate local and non-local transport terms as well as compressibility, while retaining the same number of arbitrary model parameters.

  1. FlexibleSUSY-A spectrum generator generator for supersymmetric models

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Park, Jae-hyeon; Stöckinger, Dominik; Voigt, Alexander

    2015-05-01

    We introduce FlexibleSUSY, a Mathematica and C++ package, which generates a fast, precise C++ spectrum generator for any SUSY model specified by the user. The generated code is designed with both speed and modularity in mind, making it easy to adapt and extend with new features. The model is specified by supplying the superpotential, gauge structure and particle content in a SARAH model file; specific boundary conditions e.g. at the GUT, weak or intermediate scales are defined in a separate FlexibleSUSY model file. From these model files, FlexibleSUSY generates C++ code for self-energies, tadpole corrections, renormalization group equations (RGEs) and electroweak symmetry breaking (EWSB) conditions and combines them with numerical routines for solving the RGEs and EWSB conditions simultaneously. The resulting spectrum generator is then able to solve for the spectrum of the model, including loop-corrected pole masses, consistent with user specified boundary conditions. The modular structure of the generated code allows for individual components to be replaced with an alternative if available. FlexibleSUSY has been carefully designed to grow as alternative solvers and calculators are added. Predefined models include the MSSM, NMSSM, E6SSM, USSM, R-symmetric models and models with right-handed neutrinos.

  2. Sandia Simple Particle Tracking (Sandia SPT) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen M.

    2015-06-15

    Sandia SPT is designed as software to accompany a methods book chapter that provides an introduction to labeling and tracking individual proteins. The Sandia Simple Particle Tracking code uses techniques common in the image-processing community; its value is that it facilitates implementing the methods described in the book chapter by providing the necessary open-source code. The code performs single-particle spot detection (segmentation and localization) followed by tracking (connecting the detected particles into trajectories). The book chapter, which along with the headers in each file constitutes the documentation for the code, is: Anthony, S.M.; Carroll-Portillo, A.; Timlon, J.A., Dynamics and Interactions of Individual Proteins in the Membrane of Living Cells. In Anup K. Singh (Ed.), Single Cell Protein Analysis, Methods in Molecular Biology. Springer.
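
    A compressed sketch of the two stages the abstract describes, detection followed by frame-to-frame linking, using only generic numpy/scipy building blocks; thresholds and names are illustrative and this is not the Sandia SPT implementation itself.

      import numpy as np
      from scipy.ndimage import gaussian_filter, maximum_filter

      def detect_spots(frame, sigma=1.5, threshold=0.2):
          """Return (row, col) coordinates of smoothed local maxima above a threshold."""
          smoothed = gaussian_filter(frame.astype(float), sigma)
          peaks = (smoothed == maximum_filter(smoothed, size=5)) & (smoothed > threshold)
          return np.argwhere(peaks)

      def link_frames(coords_a, coords_b, max_disp=5.0):
          """Greedy nearest-neighbour linking of spots between two consecutive frames."""
          links, taken = [], set()
          for i, p in enumerate(coords_a):
              if len(coords_b) == 0:
                  break
              d = np.linalg.norm(coords_b - p, axis=1)
              j = int(np.argmin(d))
              if d[j] <= max_disp and j not in taken:
                  links.append((i, j))
                  taken.add(j)
          return links  # list of (index in frame A, index in frame B) pairs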

  3. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  4. Control and formation mechanism of extended nanochannel geometry in colloidal mesoporous silica particles.

    PubMed

    Sokolov, I; Kalaparthi, V; Volkov, D O; Palantavida, S; Mordvinova, N E; Lebedev, O I; Owens, J

    2017-01-04

    A large class of colloidal multi-micron mesoporous silica particles have well-defined cylindrical nanopores, nanochannels which self-assembled in the templated sol-gel process. These particles are of broad interest in photonics, for timed drug release, enzyme stabilization, separation and filtration technologies, catalysis, etc. Although the pore geometry and mechanism of pore formation of such particles has been widely investigated at the nanoscale, their pore geometry and its formation mechanism at a larger (extended) scale is still under debate. The extended geometry of nanochannels is paramount for all aforementioned applications because it defines the accessibility of the nanochannels, and subsequently, the kinetics of interaction of the nanochannel content with the particle's surroundings. Here we present both an experimental and a theoretical investigation of the extended geometry and its formation mechanism in colloidal multi-micron mesoporous silica particles. We demonstrate that disordered (and consequently, well accessible) nanochannels in the initially formed colloidal particles gradually align and form extended self-sealed channels. This knowledge makes it possible to control the percentage of disordered versus self-sealed nanochannels, which defines the accessibility of nanochannels in such particles. We further show that the observed aligning of the channels is in agreement with theory; it is thermodynamically favored as it decreases the Gibbs free energy of the particles. Besides the practical use of the obtained results, developing a fundamental understanding of the mechanisms of morphogenesis of complex geometry of nanopores will open doors to efficient and controllable synthesis that will, in turn, further fuel the practical utilization of these particles.

  5. Evaluation of nuclear reaction cross section data for the production of (87)Y and (88)Y via proton, deuteron and alpha-particle induced transmutations.

    PubMed

    Zaneb, H; Hussain, M; Amjad, N; Qaim, S M

    2016-06-01

    Proton, deuteron and alpha-particle induced reactions on (87,88)Sr, (nat)Zr and (85)Rb targets were evaluated for the production of (87,88)Y. The literature data were compared with nuclear model calculations using the codes ALICE-IPPE, TALYS 1.6 and EMPIRE 3.2. The evaluated cross sections were generated; therefrom thick target yields of (87,88)Y were calculated. Analysis of radio-yttrium impurities and yield showed that the (87)Sr(p, n)(87)Y and (88)Sr(p, n)(88)Y reactions are the best routes for the production of (87)Y and (88)Y, respectively. The calculated yield for the (87)Sr(p, n)(87)Y reaction is 104 MBq/μAh in the energy range of 14 → 2.7 MeV. Similarly, the calculated yield for the (88)Sr(p, n)(88)Y reaction is 3.2 MBq/μAh in the energy range of 15 → 7 MeV.

  6. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    NASA Astrophysics Data System (ADS)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability for large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although this is an obvious scalability issue if mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh-entity-centric view of the particle-mesh relationship, provides opportunities to address the data locality needs of many-core and GPU-supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Supported by the Partnership for Edge Physics Simulation (EPSI), Grant No. DE-SC0008449, and the Center for Extended Magnetohydrodynamic Modeling (CEMM), Grant No. DE-SC0006618.

  7. Modeling Giant Sawtooth Modes in DIII-D using the NIMROD code

    NASA Astrophysics Data System (ADS)

    Kruger, Scott; Jenkins, Thomas; Held, Eric; King, Jacob; NIMROD Team

    2014-10-01

    Ongoing efforts to model giant sawtooth cycles in DIII-D shot 96043 using NIMROD are summarized. In this discharge, an energetic ion population induced by RF heating modifies the sawtooth stability boundary, supplanting the conventional sawtooth cycle with longer-period giant sawtooth oscillations of much larger amplitude. NIMROD has the unique capability of being able to use both continuum kinetic and particle-in-cell numerical schemes to model the RF-induced hot-particle distribution effects on the sawtooth stability. This capability is used to numerically investigate the role played by the form of the energetic particle distribution, including a possible high-energy tail drawn out by the RF, to study the sawtooth threshold and subsequent nonlinear evolution. Equilibrium reconstructions from the experimental data are used to enable these detailed validation studies. Effects of other parameters on the sawtooth behavior (such as the plasma Lundquist number and hot-particle β-fraction) are also considered. Ultimately, we hope to assess the degree to which NIMROD's extended MHD model correctly simulates the observed linear onset and nonlinear behavior of the giant sawtooth, and to establish its reliability as a predictive modeling tool for these modes. This work was initiated by the late Dr. Dalton Schnack. Equilibria were provided by Dr. A. Turnbull of General Atomics.

  8. The dynamics of stellar discs in live dark-matter haloes

    NASA Astrophysics Data System (ADS)

    Fujii, M. S.; Bédorf, J.; Baba, J.; Portegies Zwart, S.

    2018-06-01

    Recent developments in computer hardware and software enable researchers to simulate the self-gravitating evolution of galaxies at a resolution comparable to the actual number of stars. Here we present the results of a series of such simulations. We performed N-body simulations of disc galaxies with between 100 and 500 million particles over a wide range of initial conditions. Our calculations include a live bulge, disc, and dark-matter halo, each of which is represented by self-gravitating particles in the N-body code. The simulations are performed using the gravitational N-body tree-code BONSAI running on the Piz Daint supercomputer. We find that the time-scale over which the bar forms increases exponentially with decreasing disc-mass fraction and that the bar formation epoch exceeds a Hubble time when the disc-mass fraction is ˜0.35. These results can be explained with the swing-amplification theory. The condition for the formation of m = 2 spirals is consistent with that for the formation of the bar, which is also an m = 2 phenomenon. We further argue that the non-barred grand-design spiral galaxies are transitional, and that they evolve to barred galaxies on a dynamical time-scale. We also confirm that the disc-mass fraction and shear rate are important parameters for the morphology of disc galaxies. The former affects the number of spiral arms and the bar formation epoch, and the latter determines the pitch angle of the spiral arms.

  9. IMPETUS: Consistent SPH calculations of 3D spherical Bondi accretion onto a black hole

    NASA Astrophysics Data System (ADS)

    Ramírez-Velasquez, J. M.; Sigalotti, L. Di G.; Gabbasov, R.; Cruz, F.; Klapp, J.

    2018-04-01

    We present three-dimensional calculations of spherically symmetric Bondi accretion onto a stationary supermassive black hole (SMBH) of mass 10^8 M⊙ within a radial range of 0.02–10 pc, using a modified version of the smoothed particle hydrodynamics (SPH) GADGET-2 code, which ensures approximate first-order consistency (i.e., second-order accuracy) for the particle approximation. First-order consistency is restored by allowing the number of neighbours, nneigh, and the smoothing length, h, to vary with the total number of particles, N, such that the asymptotic limits nneigh → ∞ and h → 0 hold as N → ∞. The ability of the method to reproduce the isothermal (γ = 1) and adiabatic (γ = 5/3) Bondi accretion is investigated with increased spatial resolution. In particular, for the isothermal models the numerical radial profiles closely match the Bondi solution, except near the accretor, where the density and radial velocity are slightly underestimated. However, as nneigh is increased and h is decreased, the calculations approach first-order consistency and the deviations from the Bondi solution decrease. The density and radial velocity profiles for the adiabatic models are qualitatively similar to those for the isothermal Bondi accretion. Steady-state Bondi accretion is reproduced by the highly resolved consistent models with a relative error of ≲ 1% for γ = 1 and ~9% for γ = 5/3, with the adiabatic accretion taking longer than the isothermal case to reach steady flow. The performance of the method is assessed by comparing the results with those obtained using the standard GADGET-2 and the GIZMO codes.

  10. Towards robust algorithms for current deposition and dynamic load-balancing in a GPU particle in cell code

    NASA Astrophysics Data System (ADS)

    Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio

    2012-12-01

    We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary-sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
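
    For orientation, the particle-to-grid scatter in question looks as follows in a serial setting (a generic cloud-in-cell sketch, not jasmine's algorithm); the GPU difficulty is that the two scatter operations become conflicting concurrent updates whenever several particles share a cell.

      import numpy as np

      def deposit_charge_1d(x, q, nx, dx):
          """Cloud-in-cell (linear-shape) charge deposition onto a periodic 1D grid."""
          rho = np.zeros(nx)
          cell = np.floor(x / dx).astype(int)        # index of the grid point to the left
          w = x / dx - cell                          # fractional distance to that point
          np.add.at(rho, cell % nx, q * (1.0 - w))   # share charge with the left point...
          np.add.at(rho, (cell + 1) % nx, q * w)     # ...and with the right point
          return rho / dx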

  11. Extending the Host Range of Bacteriophage Particles for DNA Transduction.

    PubMed

    Yosef, Ido; Goren, Moran G; Globus, Rea; Molshanski-Mor, Shahar; Qimron, Udi

    2017-06-01

    A major limitation in using bacteriophage-based applications is their narrow host range. Approaches for extending the host range have focused primarily on lytic phages in hosts supporting their propagation rather than approaches for extending the ability of DNA transduction into phage-restrictive hosts. To extend the host range of T7 phage for DNA transduction, we have designed hybrid particles displaying various phage tail/tail fiber proteins. These modular particles were programmed to package and transduce DNA into hosts that restrict T7 phage propagation. We have also developed an innovative generalizable platform that considerably enhances DNA transfer into new hosts by artificially selecting tails that efficiently transduce DNA. In addition, we have demonstrated that the hybrid particles transduce desired DNA into desired hosts. This study thus critically extends and improves the ability of the particles to transduce DNA into novel phage-restrictive hosts, providing a platform for myriad applications that require this ability.

  12. Geometry creation for MCNP by Sabrina and XSM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Riper, K.A.

    The Monte Carlo N-Particle transport code MCNP is based on a surface description of 3-dimensional geometry. Cells are defined in terms of boolean operations on signed quadratic surfaces. MCNP geometry is entered as a card image file containing coefficients of the surface equations and a list of surfaces and operators describing cells. Several programs are available to assist in creation of the geometry specification, among them Sabrina and the new "Smart Editor" code XSM. We briefly describe geometry creation in Sabrina and then discuss XSM in detail. XSM is under development; our discussion is based on the state of XSM as of January 1, 1994.
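
    The surface/cell description can be mimicked in a few lines of Python (a hypothetical sketch, not MCNP's card syntax): each quadratic surface is a signed function of position, and a cell is a boolean combination of the surface senses.

      import numpy as np

      # Each "surface" is a signed function of position: negative inside, positive outside.
      def sphere(center, radius):
          return lambda p: np.sum((p - center)**2) - radius**2

      def plane_x(x0):
          return lambda p: p[0] - x0

      s1 = sphere(np.array([0.0, 0.0, 0.0]), 5.0)   # surface 1: sphere of radius 5 at the origin
      s2 = plane_x(0.0)                             # surface 2: plane x = 0

      # A cell as a boolean combination of surface senses:
      # inside the sphere AND on the positive-x side of the plane.
      def in_cell(p):
          return s1(p) < 0.0 and s2(p) > 0.0

      print(in_cell(np.array([1.0, 0.0, 0.0])))     # True
      print(in_cell(np.array([-1.0, 0.0, 0.0])))    # False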

  13. CPIC: a curvilinear Particle-In-Cell code for plasma-material interaction studies

    NASA Astrophysics Data System (ADS)

    Delzanno, G.; Camporeale, E.; Moulton, J. D.; Borovsky, J. E.; MacDonald, E.; Thomsen, M. F.

    2012-12-01

    We present a recently developed Particle-In-Cell (PIC) code in curvilinear geometry called CPIC (Curvilinear PIC) [1], where the standard PIC algorithm is coupled with a grid generation/adaptation strategy. Through the grid generator, which maps the physical domain to a logical domain where the grid is uniform and Cartesian, the code can simulate domains of arbitrary complexity, including the interaction of complex objects with a plasma. At present the code is electrostatic. Poisson's equation (in logical space) can be solved with either an iterative method, based on the Conjugate Gradient (CG) or the Generalized Minimal Residual (GMRES) method coupled with a multigrid solver used as a preconditioner, or directly with multigrid. The multigrid strategy is critical for the solver to perform optimally or nearly optimally as the dimension of the problem increases. CPIC also features a hybrid particle mover, where the computational particles are characterized by position in logical space and velocity in physical space. The advantage of a hybrid mover, as opposed to more conventional movers that move particles directly in the physical space, is that the interpolation of the particles in logical space is straightforward and computationally inexpensive, since one does not have to track the position of the particle. We will present our latest progress on the development of the code and document the code performance on standard plasma-physics tests. Then we will present the (preliminary) application of the code to a basic dynamic-charging problem, namely the charging and shielding of a spherical spacecraft in a magnetized plasma for various levels of magnetization and including the pulsed emission of an electron beam from the spacecraft. The dynamical evolution of the sheath and the time-dependent current collection will be described. This study is in support of the ConnEx mission concept to use an electron beam from a magnetospheric spacecraft to trace magnetic field lines from the magnetosphere to the ionosphere [2]. [1] G.L. Delzanno, E. Camporeale, "CPIC: a new Particle-in-Cell code for plasma-material interaction studies", in preparation (2012). [2] J.E. Borovsky, D.J. McComas, M.F. Thomsen, J.L. Burch, J. Cravens, C.J. Pollock, T.E. Moore, and S.B. Mende, "Magnetosphere-Ionosphere Observatory (MIO): A multisatellite mission designed to solve the problem of what generates auroral arcs," Eos. Trans. Amer. Geophys. Union 79 (45), F744 (2000).

  14. Particle In Cell Codes on Highly Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.

  15. Further Studies of the NRL Collective Particle Accelerator VIA Numerical Modeling with the MAGIC Code.

    DTIC Science & Technology

    1984-08-01

    Further Studies of the NRL Collective Particle Accelerator via Numerical Modeling with the MAGIC Code, by Robert J. Barker. August 1984; final report for the period 1 April 1984 - 30 September 1984. Performing organization report number: MRC/WDC-R.

  16. ResBos2: Precision Resummation for the LHC ERA

    NASA Astrophysics Data System (ADS)

    Isaacson, Joshua Paul

    With the precision of data at the LHC, it is important to advance theoretical calculations to match it. Previously, the ResBos code was insufficient to adequately describe the data at the LHC. This required an advancement of the ResBos code, and led to the development of the ResBos2 package. This thesis discusses some of the major improvements that were implemented into the code to advance it and prepare it for the precision of the LHC. The resummation for color singlet particles is improved from approximate NNLL+NLO accuracy to N3LL+NNLO accuracy. The ResBos2 code is validated by comparing the calculation of the total cross-section for Drell-Yan processes against fixed order calculations, to ensure that the calculations are performed correctly. This allows for a prediction of the transverse momentum and φ*_η distributions for the Z boson to be consistent with the data from ATLAS at a collider energy of √s = 8 TeV. Also, the effects of the choice of resummation scheme are investigated for the Collins-Soper-Sterman and Catani-deFlorian-Grazzini formalisms. It is shown that as long as the calculation of each of these is performed such that the order of the B coefficient is exactly 1 order higher than that of the C and H coefficients, then the two formalisms are consistent. Additionally, using the improved theoretical prediction will help to reduce the theoretical uncertainty on the mass of the W boson, by reducing the uncertainty in extrapolating the dσ/dp_T^W distribution from the data for the dσ/dp_T^Z distribution by taking the ratio of the theory predictions for the Z and W transverse momentum. In addition to improving the accuracy of the color singlet final state resummation calculations, the ResBos2 code introduces the resummation of non-color singlet states in the final state. Here the details for the Higgs plus jet calculation are illustrated as an example of one such process. It is shown that it is possible to perform this resummation, but the resummation formalism needs to be modified in order to do so. The major modification that is made is the inclusion of the jet cone-size dependence in the Sudakov form factor. This result resolves, analytically, the Sudakov shoulder singularity. The results of the ResBos2 prediction are compared to both the fixed order and parton shower calculations. The calculations are shown to be consistent for all of the distributions considered up to the theoretical uncertainty. As the LHC continues to increase its data and its precision on these observables, the ability to have analytic resummation calculations for non-color singlet final states will provide a strong check of perturbative QCD. Finally, the calculation of the terms needed to match to N3LO is done in this work. Once the results of the perturbative calculation become publicly available, the ResBos2 code can easily be extended to include these corrections, and be used as a means to predict the total cross-section at N3LO as well.

  17. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    NASA Astrophysics Data System (ADS)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  18. Optimum design and measurement analysis of 0.34 THz extended interaction klystron

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Wang, Jianguo; Xi, Hongzhu; Wang, Dongyang; Wang, Bingbing; Wang, Guangqiang; Teng, Yan

    2018-02-01

    In order to develop an extended interaction klystron (EIK) with high performance in the terahertz range, the staggered-tuned structure is numerically studied, manufactured, and measured. First, the circuit is optimized to obtain high interaction strength and to avoid mode overlapping in the output cavity, ensuring the efficiency and stability of the device. Then the clustered cavities are staggered-tuned to improve its bandwidth. The particle-in-cell (PIC) code is employed to study the performance of the device under different conditions, and the practicable and reliable operating conditions are accordingly confirmed. The device can effectively amplify the input terahertz signal and its gain reaches around 19.6 dB when the working current is 150 mA. The circuit and window are fabricated and tested, and the results demonstrate their usability. The experiment on the beam's transmission is conducted and the results show that about 92% of the emitting current can successfully arrive at the collector, ensuring the validity and feasibility of the interaction process.

  19. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space, and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Secondly, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time-stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivatives symbolically and then outputs code to calculate them.
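    The symbolic-Jacobian code-generation step described above can be sketched in miniature. The toy two-field system, variable names, and SymPy-based generator below are illustrative assumptions only, not the MRC implementation:

      import sympy as sp

      # Toy two-field system standing in for the spatially discretized extended-MHD terms.
      psi, w = sp.symbols('psi w')                         # unknowns at the new time level
      dt, eta, nu = sp.symbols('dt eta nu', positive=True)
      F = sp.Matrix([-eta * psi + w,
                     -nu * w + psi * w])
      U = sp.Matrix([psi, w])

      # Crank-Nicolson residual R(U_new) = U_new - U_old - (dt/2)[F(U_new) + F(U_old)];
      # only the U_new-dependent part contributes to the Jacobian dR/dU_new.
      J = sp.eye(2) - (dt / 2) * F.jacobian(U)

      # Emit C code for the Jacobian entries, as a generator feeding a direct solver might.
      for i in range(2):
          for j in range(2):
              print(f"jac[{i}][{j}] = {sp.ccode(J[i, j])};")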

  20. Effect of tidal fields on star clusters

    NASA Technical Reports Server (NTRS)

    Chernoff, David; Weinberg, Martin

    1991-01-01

    We follow the dynamical evolution of a star cluster in a galactic tidal field using a restricted N-body code. We find large asymmetric distortions in the outer profile of the cluster in the first 10 or so crossing times as material is lost. Prograde stars escape preferentially and establish a potentially observable retrograde rotation in the halo. We present the rate of particle loss and compare with the prescription proposed by Lee and Ostriker (1987).

  1. A method for determining electrophoretic and electroosmotic mobilities using AC and DC electric field particle displacements.

    PubMed

    Oddy, M H; Santiago, J G

    2004-01-01

    We have developed a method for measuring the electrophoretic mobility of submicrometer, fluorescently labeled particles and the electroosmotic mobility of a microchannel. We derive explicit expressions for the unknown electrophoretic and the electroosmotic mobilities as a function of particle displacements resulting from alternating current (AC) and direct current (DC) applied electric fields. Images of particle displacements are captured using an epifluorescent microscope and a CCD camera. A custom image-processing code was developed to determine image streak lengths associated with AC measurements, and a custom particle tracking velocimetry (PTV) code was devised to determine DC particle displacements. Statistical analysis was applied to relate mobility estimates to measured particle displacement distributions.

  2. Proceedings of the Scientific Conference on Obscuration and Aerosol Research Held in Aberdeen Proving Ground, Maryland on June 22-25, 1992

    DTIC Science & Technology

    1993-06-01

    Qad and the other, which can be considered due to edge effects, Qcd. 2.2.1 Extended Anomalous Diffraction. The anomalous diffraction formula is derived...particle with an array of N point dipoles on a cubic lattice. The polarization of each dipole is found by solving a self-consistent set of linear

  3. Modeling anomalous radial transport in kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2009-11-01

    Anomalous transport is typically the dominant component of the radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.

  4. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.

  5. Investigation on the mechanical properties of polyurea (PU)/melamine formaldehyde (MF) microcapsules prepared with different chain extenders.

    PubMed

    Hu, Jianfeng; Zhang, Xiaotong; Qu, Jinqing

    2018-05-02

    There is a lack of understanding of how to control the mechanical properties of moisture-curing PU/MF microcapsules, which has limited their further application. In this study, PU/MF microcapsules containing a core of isophorone diisocyanate (IPDI) were prepared with different chain extenders, polyetheramine D400, H2O, triethylenetetramine and polyetheramine (PEA) D230, by following a two-step synthesis method. Fourier transform infra-red (FTIR) spectroscopy, Malvern particle sizing, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and a micromanipulation technique were used to identify the chemical bonds in the shell and the size distributions, structure, thickness, and mechanical properties of the microcapsules. The results show that PU/MF microcapsules were successfully prepared. Tr increased from 46.4 ± 13.9 N/m to 75.8 ± 23.3 N/m when the extender was changed from D400 to D230, and from 51.3 ± 14.1 to 94.8 ± 17.5 N/m when the swelling time was increased from 1 to 3 h. Morphologies of the shell were utilised to understand the mechanism of the reactions forming the shell materials.

  6. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    NASA Astrophysics Data System (ADS)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions which method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The structure of the particle code is highly modular, and segregated from the PDE solver, and can thus be easily transferred to other programs, or adapted for various application cases.
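    The modular plugin structure described above can be sketched in miniature; the registry, class names, and methods below are illustrative assumptions written in Python (ASPECT itself is a C++ code with its own plugin interfaces):

      import random
      from dataclasses import dataclass, field

      # A toy plugin registry: particle generators, per-particle properties, and
      # advection schemes are registered by name and composed at run time.
      REGISTRY = {"generator": {}, "property": {}, "advection": {}}

      def register(kind, name):
          def deco(cls):
              REGISTRY[kind][name] = cls
              return cls
          return deco

      @dataclass
      class Particle:
          position: tuple
          properties: dict = field(default_factory=dict)

      @register("generator", "random_uniform")
      class RandomUniformGenerator:
          def generate(self, n, rng):
              return [Particle((rng.random(), rng.random())) for _ in range(n)]

      @register("property", "initial_composition")
      class InitialComposition:
          def initialize(self, p):
              p.properties["composition"] = 1.0 if p.position[0] < 0.5 else 0.0

      @register("advection", "rk2")
      class RK2Advection:
          def advect(self, p, velocity, dt):
              mid = tuple(x + 0.5 * dt * v for x, v in zip(p.position, velocity(p.position)))
              p.position = tuple(x + dt * v for x, v in zip(p.position, velocity(mid)))

      # A driver composes plugins by name, much as an input file would select them.
      rng = random.Random(0)
      particles = REGISTRY["generator"]["random_uniform"]().generate(100, rng)
      prop, adv = REGISTRY["property"]["initial_composition"](), REGISTRY["advection"]["rk2"]()
      for p in particles:
          prop.initialize(p)
          adv.advect(p, velocity=lambda x: (0.5 - x[1], x[0] - 0.5), dt=0.01)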

  7. Neutron production cross sections for (d,n) reactions at 55 MeV

    NASA Astrophysics Data System (ADS)

    Wakasa, T.; Goto, S.; Matsuno, M.; Mitsumoto, S.; Okada, T.; Oshiro, H.; Sakaguchi, S.

    2017-08-01

    The cross sections for (d,n) reactions on ^{nat}C-^{197}Au have been measured at a bombarding energy of 55 MeV and a laboratory scattering angle of θ_lab = 9.5°. The angular distributions for the ^{nat}C(d,n) reaction have also been obtained at θ_lab = 0°-40°. The neutron energy spectra are dominated by deuteron breakup contributions and their peak positions can be reasonably reproduced by considering the Coulomb force effects. The data are compared with the TENDL-2015 nuclear data and Particle and Heavy Ion Transport code System (PHITS) calculations. Both calculations fail to reproduce the measured energy spectra and angular distributions.

  8. 3-D Particle Simulation of Strongly-Coupled Chains of Charged Polymers

    NASA Astrophysics Data System (ADS)

    Tanaka, Toyoichi; Tanaka, Motohiko; Pande, V.; Grosberg, A.

    1996-11-01

    The behavior of a polyampholyte (PA), a connected chain of charged beads (molecules) submerged in a neutral solvent, is studied using a 3-D particle simulation code. The major issue is how the equilibrium and kinetics of the PA depend on the thermal and electrostatic forces, i.e., the coupling constant Γ = e²/aT. We follow the dynamical evolution of the PA, considering (1) the electrostatic force, (2) the binding force between adjacent beads, (3) the random thermal force exerted by the solvent, and (4) the frictional force: $m\,d\mathbf{v}_i/dt = \sum_j \frac{Z_i Z_j e^2}{|\mathbf{r}_i-\mathbf{r}_j|^2}\hat{\mathbf{r}}_{ij} - \frac{3T}{a^2}(2\mathbf{r}_i-\mathbf{r}_{i+1}-\mathbf{r}_{i-1}) + \mathbf{F}^{(\mathrm{th})} - m\nu\mathbf{v}_i$. Preliminary runs show that, when the excess charge δN on the chain is larger than N^{1/2} (N: the number of beads), the size of the polyampholyte increases as in the Monte Carlo simulation using the energy principle [Kantor, Kardar and Li, Phys. Rev. E 49, 1383 (1994)]. The measured time rate of the size increase at a fixed value of Γ scales as δN - N^{1/2} for δN > N^{1/2}, and vanishes otherwise.
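    The stated equation of motion can be integrated with a short script; the parameters, units, and Langevin-style thermal kick below are illustrative assumptions, not the authors' simulation code:

      import numpy as np

      # Bead chain with Coulomb forces, harmonic bonds to nearest neighbours,
      # random thermal kicks, and linear friction (e = m = 1, arbitrary units).
      rng = np.random.default_rng(0)
      N, T, a, nu, dt, nsteps = 32, 1.0, 1.0, 1.0, 1e-3, 10000
      Z = rng.choice([-1.0, 1.0], size=N)                       # bead charges
      r = np.cumsum(rng.normal(scale=a, size=(N, 3)), axis=0)   # random-walk initial chain
      v = np.zeros((N, 3))

      def forces(r):
          f = np.zeros_like(r)
          d = r[:, None, :] - r[None, :, :]                     # r_i - r_j
          dist = np.linalg.norm(d, axis=-1)
          np.fill_diagonal(dist, np.inf)
          # Coulomb: Z_i Z_j / |r_ij|^2 along the unit vector r_ij
          f += np.sum((Z[:, None] * Z[None, :] / dist**3)[:, :, None] * d, axis=1)
          # Harmonic bonds: -(3T/a^2)(2 r_i - r_{i+1} - r_{i-1}), one-sided at the ends
          f[1:-1] -= (3.0 * T / a**2) * (2.0 * r[1:-1] - r[2:] - r[:-2])
          f[0] -= (3.0 * T / a**2) * (r[0] - r[1])
          f[-1] -= (3.0 * T / a**2) * (r[-1] - r[-2])
          return f

      for _ in range(nsteps):
          f_th = rng.normal(scale=np.sqrt(2.0 * nu * T / dt), size=r.shape)  # thermal kicks
          v += dt * (forces(r) + f_th - nu * v)
          r += dt * v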

  9. Implications for Post-processing Nucleosynthesis of Core-collapse Supernova Models with Lagrangian Particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.

    In this paper, we investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only $(\alpha,\gamma)$ reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles; inconsistent thermodynamic evolution, including misestimation of expansion timescales; and uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. Finally, we present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 $M_{\odot}$ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.

  10. Implications for Post-processing Nucleosynthesis of Core-collapse Supernova Models with Lagrangian Particles

    NASA Astrophysics Data System (ADS)

    Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.; Lee, C. T.; Lentz, Eric J.; Messer, O. E. Bronson

    2017-07-01

    We investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only (α,γ) reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles; inconsistent thermodynamic evolution, including misestimation of expansion timescales; and uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 M⊙ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.

  11. Three-dimensional modeling of the neutral gas depletion effect in a helicon discharge plasma

    NASA Astrophysics Data System (ADS)

    Kollasch, Jeffrey; Schmitz, Oliver; Norval, Ryan; Reiter, Detlev; Sovinec, Carl

    2016-10-01

    Helicon discharges provide an attractive radio-frequency driven regime for plasma, but neutral-particle dynamics present a challenge to extending performance. A neutral gas depletion effect occurs when neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. The Monte Carlo neutral particle tracking code EIRENE was set up for the MARIA helicon experiment at UW-Madison to study its neutral particle dynamics. Prescribed plasma temperature and density profiles similar to those in the MARIA device are used in EIRENE to investigate the main causes of the neutral gas depletion effect. The most dominant plasma-neutral interactions are included so far, namely electron impact ionization of neutrals, charge exchange interactions of neutrals with plasma ions, and recycling at the wall. Parameter scans show how the neutral depletion effect depends on parameters such as the Knudsen number, plasma density and temperature, and gas-surface interaction accommodation coefficients. Results are compared to similar analytic studies in the low Knudsen number limit. Plans to incorporate a similar Monte Carlo neutral model into a larger helicon modeling framework are discussed. This work is funded by the NSF CAREER Award PHY-1455210.

  12. Implications for Post-processing Nucleosynthesis of Core-collapse Supernova Models with Lagrangian Particles

    DOE PAGES

    Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.; ...

    2017-06-26

    In this paper, we investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only $(\alpha,\gamma)$ reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles; inconsistent thermodynamic evolution, including misestimation of expansion timescales; and uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. Finally, we present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 $M_{\odot}$ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.

  13. Large Hadron Collider at CERN: Beams generating high-energy-density matter.

    PubMed

    Tahir, N A; Schmidt, R; Shutov, A; Lomonosov, I V; Piriz, A R; Hoffmann, D H H; Deutsch, C; Fortov, V E

    2009-04-01

    This paper presents numerical simulations that have been carried out to study the thermodynamic and hydrodynamic responses of a solid copper cylindrical target that is facially irradiated along the axis by one of the two Large Hadron Collider (LHC) 7 TeV/c proton beams. The energy deposition by protons in solid copper has been calculated using an established particle interaction and Monte Carlo code, FLUKA, which is capable of simulating all components of the particle cascades in matter, up to multi-TeV energies. These data have been used as input to a sophisticated two-dimensional hydrodynamic computer code, BIG2, that has been employed to study this problem. The prime purpose of these investigations was to assess the damage caused to the equipment if the entire LHC beam is lost at a single place. The FLUKA calculations show that the energy of the protons will be deposited in solid copper within about 1 m, assuming constant material parameters. Nevertheless, our hydrodynamic simulations have shown that the energy deposition region will extend to a length of about 35 m over the beam duration. This is due to the fact that the first few tens of bunches deposit sufficient energy to produce a high pressure that generates an outgoing radial shock wave. Shock propagation leads to a continuous reduction in the density at the target center, which allows the protons delivered in subsequent bunches to penetrate deeper and deeper into the target. This phenomenon has also been seen in the case of heavy-ion-heated targets [N. A. Tahir, A. Kozyreva, P. Spiller, D. H. H. Hoffmann, and A. Shutov, Phys. Rev. E 63, 036407 (2001)]. This effect needs to be considered in the design of a sacrificial beam stopper. These simulations have also shown that the target is severely damaged and is converted into a huge sample of high-energy-density (HED) matter. In fact, the inner part of the target is transformed into a strongly coupled plasma with fairly uniform physical conditions. This work, therefore, has suggested an additional very important application of the LHC, namely, studies of HED states in matter.

  14. Polydisperse particle-driven gravity currents in non-rectangular cross section channels

    NASA Astrophysics Data System (ADS)

    Zemach, T.

    2018-01-01

    We consider a high-Reynolds-number gravity current generated by a polydisperse suspension of n types of particles distributed in a fluid of density ρ_i. Each class of particles in suspension has a different settling velocity. The current propagates along a channel of non-rectangular cross section into an ambient fluid of constant density ρ_a. The bottom and top of the channel are at z = 0 and z = H, and the cross section is given by the quite general form -f_1(z) ≤ y ≤ f_2(z) for 0 ≤ z ≤ H. The flow is modeled by the one-layer shallow-water equations obtained for the time-dependent motion. We solve the problem by a finite-difference numerical code to present typical profiles of the height h, velocity u, and particle mass fractions (concentrations) ϕ^(j), j = 1, …, n. The runout length of suspensions in channels of power-law cross sections is analytically predicted using a simplified depth-averaged "box" model. We demonstrate that any degree of polydispersivity adds to the runout length of the currents, relative to that of equivalent monodisperse currents with an average settling velocity. The theoretical predictions are supported by the available experimental data. The present approach is a significant generalization of the particle-driven gravity current problem: on the one hand, the monodisperse current in non-rectangular channels is now a particular case of n = 1; on the other hand, the classical formulation of polydisperse currents for a rectangular channel is now just a particular case, f(z) = const., in the wide domain of cross sections covered by this new model.

  15. A new high transmission inlet for the Caltech nano-RDMA for size distribution measurements of sub-3 nm ions at ambient concentrations

    NASA Astrophysics Data System (ADS)

    Franchin, A.; Downard, A. J.; Kangasluoma, J.; Nieminen, T.; Lehtipalo, K.; Steiner, G.; Manninen, H. E.; Petäjä, T.; Flagan, R. C.; Kulmala, M.

    2015-06-01

    Reliable and reproducible measurements of atmospheric aerosol particle number size distributions below 10 nm require optimized classification instruments with high particle transmission efficiency. Almost all DMAs have an unfavorable potential gradient at the outlet (e.g. long column, Vienna type) or at the inlet (nano-radial DMA). This feature prevents them from achieving a good transmission efficiency for the smallest nanoparticles. We developed a new high transmission inlet for the Caltech nano-radial DMA (nRDMA) that increases the transmission efficiency to 12 % for ions as small as 1.3 nm in mobility equivalent diameter (corresponding to 1.2 × 10⁻⁴ m² V⁻¹ s⁻¹ in electrical mobility). We successfully deployed the nRDMA, equipped with the new inlet, in chamber measurements, using a Particle Size Magnifier (PSM) and a booster Condensation Particle Counter (CPC) as a counter. With this setup, we were able to measure size distributions of ions between 1.3 and 6 nm, corresponding to a mobility range from 1.2 × 10⁻⁴ to 5.8 × 10⁻⁶ m² V⁻¹ s⁻¹. The system was modeled, tested in the laboratory and used to measure negative ions at ambient concentrations in the CLOUD 7 measurement campaign at CERN. We achieved a higher size resolution than techniques currently used in field measurements, and maintained a good transmission efficiency at moderate inlet and sheath air flows (2.5 and 30 LPM, respectively). In this paper, by measuring size distributions at high size resolution down to 1.3 nm, we extend the limit of the current technology. The current setup is limited to ion measurements. However, we envision that future research focused on the charging mechanisms could extend the technique to measure neutral aerosol particles as well, so that it will be possible to measure size distributions of ambient aerosols from 1 nm to 1 μm.

  16. A new response matrix for a 6LiI scintillator BSS system

    NASA Astrophysics Data System (ADS)

    Lacerda, M. A. S.; Méndez-Villafañe, R.; Lorente, A.; Ibañez, S.; Gallego, E.; Vega-Carrillo, H. R.

    2017-10-01

    A new response matrix was calculated for a Bonner Sphere Spectrometer (BSS) with a 6LiI(Eu) scintillator, using the Monte Carlo N-Particle radiation transport code MCNPX. Responses were calculated for 6 spheres and the bare detector, for energies varying from 1.059 × 10⁻⁹ MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 221 energy groups. A comparison was made between the responses obtained in this work and others published elsewhere for the same detector model. The calculated response functions were inserted in the response input file of the MAXED code and used to unfold the total and direct neutron spectra generated by the 241Am-Be source of the Universidad Politécnica de Madrid (UPM). These spectra were compared with those obtained using the same unfolding code with the Mares and Schraube response matrix.

  17. A Short Research Note on Calculating Exact Distribution Functions and Random Sampling for the 3D NFW Profile

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Howlett, Cullan

    2018-06-01

    In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject, or numeric interpolation (sometimes via a lookup table) for projecting random Uniform samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
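    The authors release R and Python implementations of this sampling. Purely as an illustration of the idea, the quantile of the scaled radius x = r/r_s for a profile truncated at concentration c can be written with the Lambert W function; the sketch below is our own construction under that convention and may differ in detail from the published form:

      import numpy as np
      from scipy.special import lambertw

      def mu(x):
          """Dimensionless NFW mass profile, mu(x) = ln(1 + x) - x/(1 + x)."""
          return np.log1p(x) - x / (1.0 + x)

      def nfw_quantile(p, c):
          """Scaled radius x = r/r_s at which the enclosed-mass CDF equals p,
          for an NFW profile truncated at concentration c (p uniform on [0, 1])."""
          a = 1.0 + p * mu(c)
          w = np.real(lambertw(-np.exp(-a), k=0))   # principal branch; real for a >= 1
          return -1.0 / w - 1.0

      # Example: draw 10^5 radii (in units of r_s) for a c = 10 halo.
      rng = np.random.default_rng(42)
      x = nfw_quantile(rng.uniform(size=100_000), c=10.0)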

  18. Local unitary transformation method for large-scale two-component relativistic calculations. II. Extension to two-electron Coulomb interaction.

    PubMed

    Seino, Junji; Nakai, Hiromi

    2012-10-14

    The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to the two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X2 and in the hydrogen halide molecules (HX)n (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear scaling with respect to the system size and a small prefactor.

  19. Microgels for long-term storage of vitamins for extended spaceflight.

    PubMed

    Schroeder, R

    2018-02-01

    Biocompatible materials that can encapsulate large amounts of nutrients while protecting them from degrading environmental influences are highly desired for extended manned spaceflight. In this study, alkaline-degradable microgels based on poly(N-vinylcaprolactam) (PVCL) were prepared and analysed with regard to their ability to stabilise retinol, which acts as a model vitamin (vitamin A1). It was investigated whether secondary crosslinking of the particles with a polyphenol can prevent the isomerisation of biologically active all-trans retinol to the biologically inactive cis form. Both loading with retinol and secondary crosslinking of the particles were performed at room temperature to prevent an early degradation of the vitamin. This study showed that PVCL microgels drastically improve the water solubility of hydrophobic retinol. Additionally, it is demonstrated that the highly crosslinked microgel particles in aqueous solution can be utilised to greatly retard the light- and temperature-induced isomerisation process of retinol, by a factor of almost 100 compared to pure retinol stored in ethanol. The use of microgels offers various advantages over other drug delivery systems as they exhibit enhanced biocompatibility and superior aqueous solubility. Copyright © 2017 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  20. One-neutron transfer study of 137Xe and systematics of 13/2_1^+ and 13/2_2^+ levels in N = 83 nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reviol, W.; Sarantites, D. G.; Elson, J. M.

    2016-09-08

    Excited states in 137Xe have been studied by using the near-barrier single-neutron transfer reactions 13C(136Xe, 12C)137Xe and 9Be(136Xe, 8Be)137Xe in inverse kinematics. Particle-γ and particle-γγ coincidence measurements have been performed with the Phoswich Wall and Digital Gammasphere detector arrays. Evidence is found for a 13/2_2^+ level (E = 3137 keV) and for additional high-lying 3/2^- and 5/2^- states. The results are discussed in the framework of realistic shell-model calculations. These calculations are also extended to the 13/2_1^+ and 13/2_2^+ levels in the N = 83 isotonic chain. Furthermore, they indicate that there is a need for a value of the neutron 0i_{13/2} single-particle energy (E_SPE = 2366 keV) lower than the one proposed in the literature. It is also demonstrated that the population patterns of the j = l ± 1/2 single-particle states in 137Xe are different for the two targets used in these measurements, and the implications of this effect are addressed.

  1. Systematic dimensionality reduction for continuous-time quantum walks of interacting fermions

    NASA Astrophysics Data System (ADS)

    Izaac, J. A.; Wang, J. B.

    2017-09-01

    To extend the continuous-time quantum walk (CTQW) to simulate P distinguishable particles on a graph G composed of N vertices, the Hamiltonian of the system is expanded to act on an N^P-dimensional Hilbert space, in effect simulating the multiparticle CTQW on graph G via a single-particle CTQW propagating on the Cartesian graph product G^{□P}. The properties of the Cartesian graph product have been well studied, and classical simulations of multiparticle CTQWs are common in the literature. However, the above approach is generally applied as is when simulating indistinguishable particles, with the particle statistics then applied to the propagated N^P state vector to determine walker probabilities. We address the following question: How can we modify the underlying graph structure G^{□P} in order to simulate multiple interacting fermionic CTQWs with a reduction in the size of the state space? In this paper, we present an algorithm for systematically removing "redundant" and forbidden quantum states from consideration, which provides a significant reduction in the effective dimension of the Hilbert space of the fermionic CTQW. As a result, as the number of interacting fermions in the system increases, the classical computational resources required no longer increase exponentially for fixed N.
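    The flavour of the reduction can be sketched for non-interacting hopping, with fermionic exchange signs and interaction terms omitted; the functions and example below are illustrative assumptions, not the authors' published algorithm:

      import itertools
      import numpy as np

      # For P identical fermions on an N-vertex graph, antisymmetry forbids double
      # occupancy and makes orderings redundant, so the distinguishable-particle space
      # of size N**P reduces to the C(N, P)-dimensional space of occupied-vertex sets.

      def fermionic_basis(N, P):
          """Reduced basis as sorted tuples of occupied vertices."""
          return list(itertools.combinations(range(N), P))

      def reduced_hamiltonian(adjacency, P):
          """Project single-particle hopping (the graph adjacency matrix) onto the
          P-fermion subspace; exchange signs are omitted in this sketch."""
          N = adjacency.shape[0]
          basis = fermionic_basis(N, P)
          index = {state: k for k, state in enumerate(basis)}
          H = np.zeros((len(basis), len(basis)))
          for k, state in enumerate(basis):
              occ = set(state)
              for i in state:                       # hop one fermion i -> j
                  for j in range(N):
                      if adjacency[i, j] and j not in occ:
                          new = tuple(sorted((occ - {i}) | {j}))
                          H[index[new], k] += adjacency[i, j]
          return H, basis

      # Example: 3 fermions on a 10-vertex cycle -> dimension C(10, 3) = 120, not 10**3.
      A = np.zeros((10, 10))
      for vtx in range(10):
          A[vtx, (vtx + 1) % 10] = A[(vtx + 1) % 10, vtx] = 1.0
      H, basis = reduced_hamiltonian(A, 3)
      print(H.shape)                                # (120, 120)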

  2. Generalized power-spectrum Larmor formula for an extended charged particle embedded in a harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Marengo, Edwin A.; Khodja, Mohamed R.

    2006-09-01

    The nonrelativistic Larmor radiation formula, giving the power radiated by an accelerated charged point particle, is generalized for a spatially extended particle in the context of the classical charged harmonic oscillator. The particle is modeled as a spherically symmetric rigid charge distribution that possesses both translational and spinning degrees of freedom. The power spectrum obtained exhibits a structure that depends on the form factor of the particle, but reduces, in the limit of an infinitesimally small particle and for the charge distributions considered, to Larmor’s familiar result. It is found that for finite-duration small-enough accelerations as well as perpetual uniform accelerations the power spectrum of the spatially extended particle reduces to that of a point particle. It is also found that when the acceleration is violent or the size parameter of the particle is very large compared to the wavelength of the emitted radiation the power spectrum is highly suppressed. Possible applications are discussed.

  3. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Morgan C.

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to calculate radiation dose due to the neutron environment around a MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.

  4. The bar-halo interaction - I. From fundamental dynamics to revised N-body requirements

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.; Katz, Neal

    2007-02-01

    A galaxy remains near equilibrium for most of its history. Only through resonances can non-axisymmetric features, such as spiral arms and bars, exert torques over large scales and change the overall structure of the galaxy. In this paper, we describe the resonant interaction mechanism in detail, derive explicit criteria for the particle number required to simulate these dynamical processes accurately using N-body simulations, and illustrate them with numerical experiments. To do this, we perform a direct numerical solution of perturbation theory, in short, by solving for each orbit in an ensemble, and make detailed comparisons with N-body simulations. The criteria include: sufficient particle coverage in phase space near the resonance and enough particles to minimize gravitational potential fluctuations that will change the dynamics of the resonant encounter. These criteria are general in concept and can be applied to any dynamical interaction. We use the bar-halo interaction as our primary example owing to its technical simplicity and astronomical ubiquity. Some of our more surprising findings are as follows. First, the inner-Lindblad-like resonance, responsible for coupling the bar to the central halo cusp, requires more than equal-mass particles within the virial radius or inside the bar radius for a Milky Way-like bar in a Navarro, Frenk & White profile. Secondly, orbits that linger near the resonance receive more angular momentum than orbits that move through the resonance quickly. Small-scale fluctuations present in state-of-the-art particle-particle simulations can knock orbits out of resonance, preventing them from lingering and thereby decreasing the torque per orbit. This can be offset by the larger number of orbits affected by the resonance due to the diffusion. However, noise from orbiting substructure remains at least an order of magnitude too small to be of consequence. Applied to N-body simulations, the required particle numbers are sufficiently high for scenarios of interest that apparent convergence in particle number is misleading: the convergence with N may still be in the noise-dominated regime. State-of-the-art simulations are not adequate to follow all aspects of secular evolution driven by the bar-halo interaction. It is not possible to derive particle number requirements that apply to all situations; for example, more subtle interactions may be even more difficult to simulate. Therefore, we present a procedure to test the requirements of individual N-body codes on the actual problem of interest.

  5. Extending Mondrian Memory Protection

    DTIC Science & Technology

    2010-11-01

    a kernel semaphore is locked or unlocked. In addition, we extended the system call interface to receive notifications about user-land locking...operations (such as calls to the mutex and semaphore code provided by the C library). By patching the dynamically loadable GLibC5, we are able to test... semaphores, and spinlocks. ...to loading extension plugins. This prevents any untrusted code

  6. A comprehensive study of MPI parallelism in three-dimensional discrete element method (DEM) simulation of complex-shaped granular particles

    NASA Astrophysics Data System (ADS)

    Yan, Beichuan; Regueiro, Richard A.

    2018-02-01

    A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and theoretical functions for 3-D DEM scalability and memory usage are derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as: minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes for simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided, and they demonstrate high speedup and excellent scalability. It is also discovered that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
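    The ghost/border-layer exchange described above can be sketched for a 1-D slab decomposition; the mpi4py example below is an illustrative assumption, not the paper's C++/MPI implementation:

      import numpy as np
      from mpi4py import MPI

      # Each rank owns a slab of particles along x and sends those lying within one
      # border layer (a cutoff distance) of a slab face to the neighbouring rank,
      # which keeps them as ghost particles for contact detection across the boundary.
      comm = MPI.COMM_WORLD
      rank, nprocs = comm.Get_rank(), comm.Get_size()

      x_min, x_max, cutoff = 0.0, 100.0, 1.5        # global domain and layer thickness (assumed)
      slab = (x_max - x_min) / nprocs
      lo, hi = x_min + rank * slab, x_min + (rank + 1) * slab

      rng = np.random.default_rng(rank)
      particles = rng.uniform([lo, 0.0, 0.0], [hi, 10.0, 10.0], size=(1000, 3))

      left, right = rank - 1, rank + 1              # non-periodic neighbours
      send_left = particles[particles[:, 0] < lo + cutoff]
      send_right = particles[particles[:, 0] > hi - cutoff]

      ghosts = []
      if left >= 0:
          ghosts.append(comm.sendrecv(send_left, dest=left, source=left))
      if right < nprocs:
          ghosts.append(comm.sendrecv(send_right, dest=right, source=right))
      ghost_particles = np.vstack(ghosts) if ghosts else np.empty((0, 3))
      # Contact detection on this rank now uses `particles` plus `ghost_particles`.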

  7. Colour-barcoded magnetic microparticles for multiplexed bioassays.

    PubMed

    Lee, Howon; Kim, Junhoi; Kim, Hyoki; Kim, Jiyun; Kwon, Sunghoon

    2010-09-01

    Encoded particles have a demonstrated value for multiplexed high-throughput bioassays such as drug discovery and clinical diagnostics. In diverse samples, the ability to use a large number of distinct identification codes on assay particles is important to increase throughput. Proper handling schemes are also needed to readout these codes on free-floating probe microparticles. Here we create vivid, free-floating structural coloured particles with multi-axis rotational control using a colour-tunable magnetic material and a new printing method. Our colour-barcoded magnetic microparticles offer a coding capacity easily into the billions with distinct magnetic handling capabilities including active positioning for code readouts and active stirring for improved reaction kinetics in microscale environments. A DNA hybridization assay is done using the colour-barcoded magnetic microparticles to demonstrate multiplexing capabilities.

  8. Coding considerations for standalone molecular dynamics simulations of atomistic structures

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-10-01

    The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab-initio programs for simulation on a standalone computer and illustrates the approach with C-language codes in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousand particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing one's own codes has the obvious advantage of teaching techniques applicable to new problems.
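    The velocity-time integration loop described above can be sketched briefly; the record's codes are written in C with embedded-atom potentials for fcc metals, whereas this illustrative sketch assumes a simple Lennard-Jones pair potential in reduced units:

      import numpy as np

      def lj_forces(r, eps=1.0, sigma=1.0):
          """Pairwise Lennard-Jones forces (no cutoff, for brevity)."""
          d = r[:, None, :] - r[None, :, :]
          dist2 = np.sum(d * d, axis=-1)
          np.fill_diagonal(dist2, np.inf)
          inv6 = (sigma**2 / dist2) ** 3
          coeff = 24.0 * eps * (2.0 * inv6**2 - inv6) / dist2
          return np.sum(coeff[:, :, None] * d, axis=1)

      def integrate(r, v, mass=1.0, dt=1e-3, nsteps=1000):
          """Velocity-Verlet time integration of positions and velocities."""
          f = lj_forces(r)
          for _ in range(nsteps):
              v += 0.5 * dt * f / mass      # half kick
              r += dt * v                   # drift
              f = lj_forces(r)
              v += 0.5 * dt * f / mass      # half kick
          return r, v

      # 64 atoms on a 4 x 4 x 4 lattice with small thermal velocities.
      rng = np.random.default_rng(1)
      r0 = 1.2 * np.stack(np.meshgrid(*[np.arange(4.0)] * 3, indexing="ij"), axis=-1).reshape(-1, 3)
      v0 = rng.normal(scale=0.1, size=(64, 3))
      r, v = integrate(r0, v0)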

  9. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis on the hadronic and nuclear sector.

  10. Gravitational Instability of Small Particles in Stratified Dusty Disks

    NASA Astrophysics Data System (ADS)

    Shi, J.; Chiang, E.

    2012-12-01

    Self-gravity is an attractive means of forming the building blocks of planets, a.k.a. the first-generation planetesimals. For ensembles of dust particles to aggregate into self-gravitating, bound structures, they must first collect into regions of extraordinarily high density in circumstellar gas disks. We have modified the ATHENA code to simulate dusty, compressible, self-gravitating flows in a 3D shearing-box configuration, working in the limit that dust particles are small enough to be perfectly entrained in gas. We have used our code to determine the critical density thresholds required for disk gas to undergo gravitational collapse. In the strict limit that the stopping times of particles in gas are infinitesimally small, our numerical simulations and analytic calculations reveal that the critical density threshold for gravitational collapse is orders of magnitude above what has been commonly assumed. We discuss how finite but still short stopping times under realistic conditions can lower the threshold to a level that may be attainable. [Figure caption] Nonlinear development of gravitational instability in a stratified dusty disk. Shown are volume renderings of dust density for the bottom half of a disk at t = 0, 6, 8, and 9 Ω^{-1}. The initial disk first develops shearing density waves; these waves then steepen and form long filaments extending along the azimuth, which eventually break and form very dense dust clumps. [Figure caption] The time evolution of the maximum dust density within the simulation box. Run std32 is a standard run with averaged Toomre Q = 0.5; Q ≳ 1.0 for the remaining runs in the plot (Z1 has twice the standard metallicity; Q1 has twice Q_g, the Toomre Q for the gas disk alone; M1 has twice the standard dust-to-gas ratio at the midplane; R1 is constructed so that the midplane density exceeds the Roche criterion while the Toomre Q is above unity).

  11. Preliminary Study of Electron Emission for Use in the PIC Portion of MAFIA

    NASA Technical Reports Server (NTRS)

    Freeman, Jon C.

    2001-01-01

    This memorandum summarizes a study undertaken to apply the program MAFIA to the modeling of an electron gun in a traveling wave tube (TWT). The basic problem is to emit particles from the cathode in the proper manner. The electrons are emitted with the classical Maxwell-Boltzmann (M-B) energy distribution; and for a small patch of emitting surface; the distribution with angle obeys Lambert's law. This states that the current density drops off as the cosine of the angle from the normal. The motivation for the work is to extend the analysis beyond that which has been done using older codes. Some existing programs use the Child-Langmuir, or 3/2 power law, for the description of the gun. This means the current varies as the 3/2 power of the anode voltage. The proportionality constant is termed the perveance of the gun. This is limited, however, since the 3/2 variation is only an approximation. Also, if the cathode is near saturation, the 3/2 law definitely will not hold. In most of the older codes, the electron beam is decomposed into current tubes, which imply laminar flow in the beam; even though experiments show the flow to be turbulent. Also, the proper inclusion of noise in the beam is not possible. These older methods of calculation do, however, give reasonable values for parameters of the electron beam and the overall gun, and these values will be used as the starting point for a more precise particle-in-cell (PIC) calculation. To minimize the time needed for a given computer run, all beams will use the same number of particles in a simulation. This is accomplished by varying the mass and charge of the emitted particles (macroparticles) in a certain manner, to be consistent with the desired beam current.
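    The emission prescription described above (Maxwell-Boltzmann energies, Lambert's-law angular distribution) can be sketched as follows; the sampling recipe, cathode temperature, and constants are illustrative assumptions, not the MAFIA implementation:

      import numpy as np

      KB = 1.380649e-23      # J/K
      ME = 9.1093837e-31     # kg

      def sample_emission(n, T=1100.0, seed=0):
          """Sample launch velocities for n macroparticles from a cathode patch:
          Maxwell-Boltzmann speeds at temperature T, polar angles following
          Lambert's cosine law about the local surface normal (z axis)."""
          rng = np.random.default_rng(seed)
          sigma = np.sqrt(KB * T / ME)                       # per-component thermal spread
          speed = np.linalg.norm(rng.normal(scale=sigma, size=(n, 3)), axis=1)
          theta = np.arcsin(np.sqrt(rng.uniform(size=n)))    # p(theta) ~ cos(theta) sin(theta)
          phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
          vx = speed * np.sin(theta) * np.cos(phi)
          vy = speed * np.sin(theta) * np.sin(phi)
          vz = speed * np.cos(theta)                         # z along the cathode normal
          return np.column_stack([vx, vy, vz])

      v = sample_emission(10000)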

  12. Global linear gyrokinetic simulation of energetic particle-driven instabilities in the LHD stellarator

    DOE PAGES

    Spong, Donald A.; Holod, Ihor; Todo, Y.; ...

    2017-06-23

    Energetic particles are inherent to toroidal fusion systems and can drive instabilities in the Alfvén frequency range, leading to decreased heating efficiency, high heat fluxes on plasma-facing components, and decreased ignition margin. The applicability of global gyrokinetic simulation methods to macroscopic instabilities has now been demonstrated and it is natural to extend these methods to 3D configurations such as stellarators, tokamaks with 3D coils and reversed field pinch helical states. This has been achieved by coupling the GTC global gyrokinetic PIC model to the VMEC equilibrium model, including 3D effects in the field solvers and particle push. Here, this paper demonstrates the application of this new capability to the linearized analysis of Alfvénic instabilities in the LHD stellarator. For normal shear iota profiles, toroidal Alfvén instabilities in the n = 1 and 2 toroidal mode families are unstable with frequencies in the 75 to 110 kHz range. Also, an LHD case with non-monotonic shear is considered, indicating reductions in growth rate for the same energetic particle drive. Finally, since 3D magnetic fields will be present to some extent in all fusion devices, the extension of gyrokinetic models to 3D configurations is an important step for the simulation of future fusion systems.

  13. Synchronization of relativistic particles in the hyperbolic Kuramoto model

    NASA Astrophysics Data System (ADS)

    Ritchie, Louis M.; Lohe, M. A.; Williams, Anthony G.

    2018-05-01

    We formulate a noncompact version of the Kuramoto model by replacing the invariance group SO(2) of the plane rotations by the noncompact group SO(1, 1). The N equations of the system are expressed in terms of hyperbolic angles αi and are similar to those of the Kuramoto model, except that the trigonometric functions are replaced by hyperbolic functions. Trajectories are generally unbounded, nevertheless synchronization occurs for any positive couplings κi, arbitrary positive multiplicative parameters λi and arbitrary exponents ωi. There are no critical values for the coupling constants. We measure the onset of synchronization by means of several order and disorder parameters. We show numerically and by means of exact solutions for N = 2 that solutions can develop singularities if the coupling constants are negative, or if the initial values are not suitably restricted. We describe a physical interpretation of the system as a cluster of interacting relativistic particles in 1 + 1 dimensions, subject to linear repulsive forces with space-time trajectories parametrized by the rapidity αi. The trajectories synchronize provided that the particle separations remain predominantly time-like, and the synchronized cluster can be viewed as a bound state of N relativistic particle constituents. We extend the defining equations of the system to higher dimensions by means of vector equations which are covariant with respect to SO(p, q).

  14. Operation of the Airmodus A11 nano Condensation Nucleus Counter at various inlet pressures and various operation temperatures, and design of a new inlet system

    DOE PAGES

    Kangasluoma, Juha; Franchin, Alessandro; Duplissy, Jonahtan; ...

    2016-07-14

    Measuring sub-3 nm particles outside of controlled laboratory conditions is a challenging task, as many of the instruments are operated at their limits and are subject to changing ambient conditions. In this study, we advance the current understanding of the operation of the Airmodus A11 nano Condensation Nucleus Counter (nCNC), which consists of an A10 Particle Size Magnifier (PSM) and an A20 Condensation Particle Counter (CPC). The effect of the inlet line pressure on the measured particle concentration was measured, and two separate regions inside the A10, where supersaturation of working fluid can take place, were identified. The possibility of varying the lower cut-off diameter of the nCNC was investigated; by scanning the growth tube temperature, the range of the lower cut-off was extended from 1–2.5 to 1–6 nm. Here we present a new inlet system, which allows automated measurement of the background concentration of homogeneously nucleated droplets, minimizes the diffusion losses in the sampling line and is equipped with an electrostatic filter to remove ions smaller than approximately 4.5 nm. Lastly, our view of the guidelines for the optimal use of the Airmodus nCNC is provided.

  15. Operation of the Airmodus A11 nano Condensation Nucleus Counter at various inlet pressures and various operation temperatures, and design of a new inlet system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kangasluoma, Juha; Franchin, Alessandro; Duplissy, Jonahtan

    Measuring sub-3 nm particles outside of controlled laboratory conditions is a challenging task, as many of the instruments are operated at their limits and are subject to changing ambient conditions. In this study, we advance the current understanding of the operation of the Airmodus A11 nano Condensation Nucleus Counter (nCNC), which consists of an A10 Particle Size Magnifier (PSM) and an A20 Condensation Particle Counter (CPC). The effect of the inlet line pressure on the measured particle concentration was measured, and two separate regions inside the A10, where supersaturation of working fluid can take place, were identified. The possibility of varying the lower cut-off diameter of the nCNC was investigated; by scanning the growth tube temperature, the range of the lower cut-off was extended from 1–2.5 to 1–6 nm. Here we present a new inlet system, which allows automated measurement of the background concentration of homogeneously nucleated droplets, minimizes the diffusion losses in the sampling line and is equipped with an electrostatic filter to remove ions smaller than approximately 4.5 nm. Lastly, our view of the guidelines for the optimal use of the Airmodus nCNC is provided.

  16. Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2000-01-01

    An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smoothed particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three-dimensional computer code. Simulations of three-dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.

  17. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsouleas, Thomas; Decyk, Viktor

    Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models", Viktor K. Decyk, University of California, Los Angeles, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in low density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference [2] below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale Particle-in-Cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, that slice's response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can be running simultaneously. The major difficulty arose when particles at the edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8]. Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasi-static scheme, including pipelining, in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References: [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June, 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July, 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June, 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August, 2008, p. 340 [AIP Conf. Proceedings, vol. 1086, Melville, NY, 2008]. [7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R. A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particle-in-cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June, 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005. [9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, "Update on Electron-Cloud Simulations Using the Package WARP-POSINST," Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June, 2009, paper FR5RFP078.

  18. Re-accumulation Scenarios Governing Final Global Shapes of Rubble-Pile Asteroids

    NASA Astrophysics Data System (ADS)

    Hestroffer, Daniel; Tanga, P.; Comito, C.; Paolicchi, P.; Walsh, K.; Richardson, D. C.; Cellino, A.

    2009-05-01

    Asteroids, since the formation of the solar system, are known to have experienced catastrophic collisions, which, depending on the impact energy, can produce a major disruption of the parent body and possibly give birth to asteroid families or binaries [1]. We present a general study of the final shape and dynamical state of asteroids produced by the re-accumulation process following a catastrophic disruption. Starting from a cloud of massive particles (mono-disperse spheres) with given density and velocity distributions, we analyse the final shape, spin state, and angular momentum of the system from numerical integration of an N-body gravitational system (code pkdgrav [2]). The re-accumulation process itself is relatively fast, with a dynamical time corresponding to the spin period of the final body (several hours). The final global shapes, which are described as tri-axial ellipsoids, exhibit slopes consistent with a degree of shear stress sustained by interlocking particles. We point out a few results:
    - the final shapes are close to those of hydrostatic equilibrium for incompressible fluids, preferentially Maclaurin spheroids rather than Jacobi ellipsoids;
    - for bodies closest to the sequence of hydrostatic equilibrium, there is a direct relation between spin, density and outer shape, suggesting that the outer surface is nearly equipotential;
    - the evolution of the shape during the process follows a track along a gradient of potential energy, without necessarily reaching its minimum;
    - the loose random packing of the particles implies a low friction angle and hence fluid-like behaviour, which extends the results of [3].
    Future steps of our analysis will include refinements of the model initial conditions and re-accumulation process, including impact shakings, realistic velocity distributions, and non-equal-sized elementary spheres. References: [1] Michel P. et al. 2001. Science 294, 1696. [2] Leinhardt Z.M. et al. 2000. Icarus 146, 133. [3] Richardson D.C. et al. 2005. Icarus 173, 349.

  19. Integrating Geochemical Reactions with a Particle-Tracking Approach to Simulate Nitrogen Transport and Transformation in Aquifers

    NASA Astrophysics Data System (ADS)

    Cui, Z.; Welty, C.; Maxwell, R. M.

    2011-12-01

    Lagrangian, particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator split into two steps: (1) the physical movement of the particles including the attachment/detachment to solid surfaces, which is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions including biodegradation are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1, the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain with variably saturated conditions. Potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. Implementation details, verification results of the enhanced code with one-dimensional analytical solutions and other existing numerical models will be presented in addition to a discussion of implementation challenges.
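
    The operator-split scheme described above can be illustrated with a deliberately simplified one-dimensional sketch (not the SLIM-FAST implementation; all parameter values are hypothetical): particles are first moved by a Lagrangian random walk, their mass is then binned to a grid, a Monod-type degradation step is applied to the gridded concentration, and the updated mass is mapped back onto the particles.

      import numpy as np

      rng = np.random.default_rng(1)
      L, nx = 100.0, 50                  # domain length [m] and number of cells
      dx = L / nx
      v, D = 0.5, 0.1                    # advection velocity [m/d], dispersion [m^2/d]
      mu_max, Ks = 0.3, 2.0              # Monod rate [1/d], half-saturation [mg/L]
      dt, n_steps = 0.5, 100             # time step [d]

      x = rng.uniform(0.0, 10.0, 5000)   # particle positions (a source zone near x = 0)
      mass = np.full(x.size, 1.0)        # solute mass carried by each particle [mg]

      for _ in range(n_steps):
          # Step 1: Lagrangian random walk (advection + dispersion), clipped at the boundaries.
          x = np.clip(x + v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), x.size), 0.0, L)
          # Step 2a: bin particle mass to cell concentrations (unit cross-section assumed).
          cell = np.minimum((x / dx).astype(int), nx - 1)
          conc = np.bincount(cell, weights=mass, minlength=nx) / dx
          # Step 2b: Monod-type degradation of the gridded concentration; a simple
          # explicit Euler step stands in for the ODE solver mentioned above.
          conc_new = np.maximum(conc - dt * mu_max * conc / (Ks + conc), 0.0)
          # Step 2c: map the updated concentrations back by rescaling particle masses.
          ratio = np.where(conc > 0.0, conc_new / conc, 1.0)
          mass *= ratio[cell]

      print("remaining solute mass [mg]:", mass.sum())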

  20. Nonlinear Diamagnetic Stabilization of Double Tearing Modes in Cylindrical MHD Simulations

    NASA Astrophysics Data System (ADS)

    Abbott, Stephen; Germaschewski, Kai

    2014-10-01

    Double tearing modes (DTMs) may occur in reversed-shear tokamak configurations if two nearby rational surfaces couple and begin reconnecting. During the DTM's nonlinear evolution it can enter an ``explosive'' growth phase leading to complete reconnection, making it a possible driver for off-axis sawtooth crashes. Motivated by similarities between this behavior and that of the m = 1 kink-tearing mode in conventional tokamaks we investigate diamagnetic drifts as a possible DTM stabilization mechanism. We extend our previous linear studies of an m = 2 , n = 1 DTM in cylindrical geometry to the fully nonlinear regime using the MHD code MRC-3D. A pressure gradient similar to observed ITB profiles is used, together with Hall physics, to introduce ω* effects. We find the diamagnetic drifts can have a stabilizing effect on the nonlinear DTM through a combination of large scale differential rotation and mechanisms local to the reconnection layer. MRC-3D is an extended MHD code based on the libMRC computational framework. It supports nonuniform grids in curvilinear coordinates with parallel implicit and explicit time integration.

  1. Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms

    PubMed Central

    Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.

    2013-01-01

    Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most of the clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws and applicator provide up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities the agreement was in the range of 1-3%, being generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162

  2. Coverage Maximization Using Dynamic Taint Tracing

    DTIC Science & Technology

    2007-03-28

    we do not have source code are handled, incompletely, via models of taint transfer. We use a little language to specify how taint transfers across a...n) 2.3.7 Implementation and Runtime Issues The taint graph instrumentation is a 2K line Ocaml module extending CIL and is supported by 5K lines of...modern scripting languages such as Ruby have taint modes that work similarly; however, all propagate taint at the variable rather than the byte level and

  3. High sensitivity, solid state neutron detector

    DOEpatents

    Stradins, Pauls; Branz, Howard M; Wang, Qi; McHugh, Harold R

    2015-05-12

    An apparatus (200) for detecting slow or thermal neutrons (160). The apparatus (200) includes an alpha particle-detecting layer (240) that is a hydrogenated amorphous silicon p-i-n diode structure. The apparatus includes a bottom metal contact (220) and a top metal contact (250) with the diode structure (240) positioned between the two contacts (220, 250) to facilitate detection of alpha particles (170). The apparatus (200) includes a neutron conversion layer (230) formed of a material containing boron-10 isotopes. The top contact (250) is pixilated with each contact pixel extending to or proximate to an edge of the apparatus to facilitate electrical contacting. The contact pixels have elongated bodies to allow them to extend across the apparatus surface (242) with each pixel having a small surface area to match capacitance based upon a current spike detecting circuit or amplifier connected to each pixel. The neutron conversion layer (860) may be deposited on the contact pixels (830) such as with use of inkjet printing of nanoparticle ink.

  4. High sensitivity, solid state neutron detector

    DOEpatents

    Stradins, Pauls; Branz, Howard M.; Wang, Qi; McHugh, Harold R.

    2013-10-29

    An apparatus (200) for detecting slow or thermal neutrons (160) including an alpha particle-detecting layer (240) that is a hydrogenated amorphous silicon p-i-n diode structure. The apparatus includes a bottom metal contact (220) and a top metal contact (250) with the diode structure (240) positioned between the two contacts (220, 250) to facilitate detection of alpha particles (170). The apparatus (200) includes a neutron conversion layer (230) formed of a material containing boron-10 isotopes. The top contact (250) is pixilated with each contact pixel extending to or proximate to an edge of the apparatus to facilitate electrical contacting. The contact pixels have elongated bodies to allow them to extend across the apparatus surface (242) with each pixel having a small surface area to match capacitance based upon a current spike detecting circuit or amplifier connected to each pixel. The neutron conversion layer (860) may be deposited on the contact pixels (830) such as with use of inkjet printing of nanoparticle ink.

  5. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
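
    The flavour of such direct, syndrome-only decoding can be seen in the single-error case of an ordinary (non-extended) RS code over GF(16): with parity-check roots α and α², a single error of value e at position i gives syndromes S1 = e·α^i and S2 = e·α^(2i), so i = log_α(S2/S1) and e = S1²/S2 follow immediately, with no error locator polynomial. The sketch below is this textbook special case, not the paper's algorithm for the extended codes.

      # GF(16) arithmetic with primitive polynomial x^4 + x + 1.
      EXP, LOG = [0] * 30, [0] * 16
      val = 1
      for k in range(15):
          EXP[k] = val
          LOG[val] = k
          val <<= 1
          if val & 0x10:
              val ^= 0x13
      for k in range(15, 30):            # duplicated tail for easy modular indexing
          EXP[k] = EXP[k - 15]

      def gf_mul(a, b):
          return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

      def gf_div(a, b):
          return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

      def syndrome(received, j):
          """Evaluate the received polynomial at alpha^j."""
          s = 0
          for i, c in enumerate(received):
              if c:
                  s ^= EXP[(LOG[c] + j * i) % 15]
          return s

      n = 15
      r = [0] * n                         # the all-zero word is a valid codeword...
      r[9] = 7                            # ...corrupted by a single error (value 7 at position 9)

      S1, S2 = syndrome(r, 1), syndrome(r, 2)
      position = LOG[gf_div(S2, S1)]      # alpha^i = S2 / S1
      value = gf_div(gf_mul(S1, S1), S2)  # e = S1^2 / S2
      print("error position:", position, "error value:", value)   # expect 9 and 7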

  6. Nuclear Structure of the Closed Subshell Nucleus 90Zr Studied with the (n,n'(gamma)) Reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, P E; Younes, Y; Becker, J A

    States in 90Zr have been observed with the (n,n'γ) reaction using both spallation and monoenergetic accelerator-produced neutrons. A scheme comprised of 81 levels and 157 transitions was constructed concentrating on levels below 5.6 MeV in excitation energy. Spins have been determined by considering data from all experimental studies performed for 90Zr. Lifetimes have been deduced using the Doppler-shift attenuation method for many of the states and transition rates have been obtained. A spherical shell-model interpretation in terms of particle-hole excitations assuming a 88Sr closed core is given. In some cases, enhancements in B(M1) and B(E2) values are observed that cannot be explained by assuming simple particle-hole excitations. Shell-model calculations using an extended fpg-shell model space reproduce the spectrum of excited states very well, and the gross features of the B(M1) and B(E2) transition rates. Transition rates for individual levels show discrepancies between calculations and experimental values.

  7. Determining the mass attenuation coefficient, effective atomic number, and electron density of raw wood and binderless particleboards of Rhizophora spp. by using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Marashdeh, Mohammad W.; Al-Hamarneh, Ibrahim F.; Abdel Munem, Eid M.; Tajuddin, A. A.; Ariffin, Alawiah; Al-Omari, Saleh

    Rhizophora spp. wood has the potential to serve as a solid water or tissue equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff) and effective electron density (Neff) of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10-60 keV energy region. The mass attenuation coefficients used in the calculations were obtained using the Monte Carlo N-Particle (MCNP5) simulation code. The MCNP5 calculations of the attenuation parameters for the Rhizophora spp. samples were plotted graphically against photon energy and discussed in terms of their relative differences compared with those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained by the XCOM program based on the mixture rule. The results indicated that the MCNP5 process can be followed to determine the attenuation of gamma rays with several photon energies in other materials.
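
    For reference, the mixture rule used by XCOM and one widely used parameterization of the effective quantities are given below (the record itself may use a slightly different convention); here w_i and f_i are the mass and number fractions of the elements, A_i and Z_i their atomic masses and numbers, and N_A is Avogadro's number.

      \[
      \left(\frac{\mu}{\rho}\right)_{\mathrm{mix}} = \sum_i w_i \left(\frac{\mu}{\rho}\right)_i, \qquad
      \sigma_a = \frac{(\mu/\rho)_{\mathrm{mix}}}{N_A \sum_i w_i/A_i}, \qquad
      \sigma_e = \frac{1}{N_A}\sum_i \frac{f_i A_i}{Z_i}\left(\frac{\mu}{\rho}\right)_i,
      \]
      \[
      Z_{\mathrm{eff}} = \frac{\sigma_a}{\sigma_e}, \qquad
      N_{\mathrm{eff}} = \frac{(\mu/\rho)_{\mathrm{mix}}}{\sigma_e}.
      \]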

  8. Role of small-norm components in extended random-phase approximation

    NASA Astrophysics Data System (ADS)

    Tohyama, Mitsuru

    2017-09-01

    The role of the small-norm amplitudes in extended random-phase approximation (RPA) theories, such as the particle-particle and hole-hole components of the one-body amplitudes and the two-body amplitudes other than the two-particle/two-hole components, is investigated for the one-dimensional Hubbard model using an extended RPA derived from the time-dependent density matrix theory. It is found that these amplitudes cannot be neglected in strongly interacting regions where the effects of ground-state correlations are significant.

  9. Development of 1D Particle-in-Cell Code and Simulation of Plasma-Wall Interactions

    NASA Astrophysics Data System (ADS)

    Rose, Laura P.

    This thesis discusses the development of a 1D particle-in-cell (PIC) code and the analysis of plasma-wall interactions. The 1D code (Plasma and Wall Simulation, PAWS) is a kinetic simulation of plasma in which both electrons and ions are treated as particles. The goal of this thesis is to study near-wall plasma interaction to better understand the mechanisms that occur in this region. The main focus of this investigation is the effect that secondary electrons have on the sheath profile. The 1D code is modeled using the PIC method: treating both the electrons and ions as macroparticles, the field is solved on each node and weighted to each macroparticle. A pre-ionized plasma was loaded into the domain and the particle velocities were sampled from a Maxwellian distribution. An important part of this code is the boundary condition at the wall: if a particle hits the wall, a secondary electron may be produced based on the incident energy. To study the sheath profile, the simulations were run for various cases. Varying background neutral gas densities were run with the 2D code and compared to experimental values. Different wall materials were simulated to show their effects on SEE. In addition, different SEE yields were run, including one study with very high SEE yields to show the presence of a space-charge-limited sheath. Wall roughness was also studied with the 1D code using random angles of incidence. In addition to the 1D code, an external 2D code was also used to investigate wall roughness without secondary electrons. The roughness profiles were created from investigations of wall roughness inside Hall thrusters, based on studies of lifetime erosion of the inner and outer walls of these devices. The 2D code, Starfish [33], is a general 2D axisymmetric/Cartesian code for modeling a wide range of plasma and rarefied gas problems. These results show that a higher SEE yield produces a smaller sheath profile and that wall roughness produces a lower SEE yield. Modeling near-wall interactions is not a simple or perfected task. Due to the lack of a second dimension and a sputtering model, it is not possible with this study to show the positive effects wall roughness could have on Hall thruster performance, since roughness arises from the negative effect of sputtering.
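
    The basic PIC cycle that such a code implements (deposit charge, solve the field on the nodes, weight the field back to the macroparticles, push) can be sketched in a few dozen lines. The toy below uses a periodic domain with immobile ions, normalized units, and no wall or secondary-electron model, so it illustrates only the method, not the bounded-plasma setup of the thesis.

      import numpy as np

      rng = np.random.default_rng(2)
      L, ng, npart = 2.0 * np.pi, 64, 20000
      dx, dt, steps = L / ng, 0.05, 200
      qm = -1.0                                    # electron charge-to-mass ratio (normalized)
      q = -L / npart                               # macroparticle charge for unit plasma frequency

      xp = rng.uniform(0.0, L, npart)              # positions
      vp = rng.normal(0.0, 0.1, npart)             # Maxwellian velocities
      vp += 0.01 * np.sin(xp)                      # small seed perturbation

      k = 2.0 * np.pi * np.fft.rfftfreq(ng, d=dx)  # angular wavenumbers

      for _ in range(steps):
          # 1) Cloud-in-cell deposition of electron charge plus a uniform ion background.
          g = xp / dx
          i0 = np.floor(g).astype(int) % ng
          w1 = g - np.floor(g)
          rho = (np.bincount(i0, weights=q * (1.0 - w1), minlength=ng)
                 + np.bincount((i0 + 1) % ng, weights=q * w1, minlength=ng)) / dx + 1.0
          # 2) Spectral Poisson solve: -k^2 phi = -rho, then E = -dphi/dx.
          rho_k = np.fft.rfft(rho)
          phi_k = np.zeros_like(rho_k)
          phi_k[1:] = rho_k[1:] / k[1:] ** 2
          E = np.fft.irfft(-1j * k * phi_k, n=ng)
          # 3) Gather the field to the particles with the same CIC weights.
          Ep = E[i0] * (1.0 - w1) + E[(i0 + 1) % ng] * w1
          # 4) Push and apply periodic boundary conditions.
          vp += qm * Ep * dt
          xp = (xp + vp * dt) % L

      print("mean kinetic energy per particle:", 0.5 * np.mean(vp ** 2))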

  10. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    NASA Astrophysics Data System (ADS)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profiles to be computed in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is written to provide an end-user interface to the model.
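
    Schematically, once the yield functions are tabulated, the ionization rate profile is just an energy integral of the incident spectrum weighted by the yields, q(z) = ∫ F(E) Y(E, z) dE. The sketch below uses a made-up placeholder yield and an assumed power-law flux purely to show the bookkeeping; the real model interpolates the MCNP6-based tables and an external density profile such as NRLMSISE-00.

      import numpy as np

      alt = np.linspace(20.0, 200.0, 91)        # altitude grid [km]
      energy = np.logspace(1.0, 3.0, 40)        # electron energies [keV] (10 keV - 1 MeV)

      def toy_yield(E, z):
          """Placeholder yield function Y(E, z) [ion pairs per electron per km]:
          a Gaussian deposition layer that moves lower with increasing energy."""
          z_peak = 120.0 - 25.0 * np.log10(E)
          width = 8.0
          pairs = E / 0.035                     # roughly 35 eV per ion pair
          return pairs * np.exp(-0.5 * ((z - z_peak) / width) ** 2) / (width * np.sqrt(2.0 * np.pi))

      Y = toy_yield(energy[:, None], alt[None, :])          # shape (n_energy, n_altitude)
      flux = 1.0e5 * (energy / 10.0) ** (-2.5)              # assumed differential flux

      q = np.trapz(flux[:, None] * Y, energy, axis=0)       # ionization rate profile q(z)
      print("peak ionization (placeholder units) at %.0f km" % alt[np.argmax(q)])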

  11. Benchmarking kinetic calculations of resistive wall mode stability

    NASA Astrophysics Data System (ADS)

    Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.

    2014-05-01

    Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particle's precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes are identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction of MISK calculations of near marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].

  12. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties. It has an increased memory lifetime for an increased system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to do error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.

  13. Verification of TEMPEST with neoclassical transport theory

    NASA Astrophysics Data System (ADS)

    Xiong, Z.; Cohen, B. I.; Cohen, R. H.; Dorr, M.; Hittinger, J.; Kerbel, G.; Nevins, W. M.; Rognlien, T.; Umansky, M.; Xu, X.

    2006-10-01

    TEMPEST is an edge gyro-kinetic continuum code developed to study boundary plasma transport over the region extending from the H-mode pedestal across the separatrix to the divertor plates. For benchmark purposes, we present results from the 4D (2r,2v) TEMPEST for both steady-state transport and time-dependent Geodesic Acoustic Modes (GAMs). We focus on an annular region inside the separatrix of a circular cross-section tokamak where analytical and numerical results are available. The parallel flow velocity and radial particle flux are obtained for different collisional regimes and compared with previous neoclassical results. The effect of radial electric field and the transition to steep edge gradients is emphasized. The dynamical response of GAMs is also shown and compared to recent theory.

  14. Efficient full wave code for the coupling of large multirow multijunction LH grills

    NASA Astrophysics Data System (ADS)

    Preinhaelter, Josef; Hillairet, Julien; Milanesio, Daniele; Maggiora, Riccardo; Urban, Jakub; Vahala, Linda; Vahala, George

    2017-11-01

    The full wave code OLGA, for determining the coupling of a single row lower hybrid launcher (waveguide grills) to the plasma, is extended to handle multirow multijunction active passive structures (like the C3 and C4 launchers on TORE SUPRA) by implementing the scattering matrix formalism. The extended code is still computationally fast because of the use of (i) 2D splines of the plasma surface admittance in the accessibility region of the k-space, (ii) high order Gaussian quadrature rules for the integration of the coupling elements and (iii) the symmetries of the coupling elements in the multiperiodic structures. The extended OLGA code is benchmarked against the ALOHA-1D, ALOHA-2D and TOPLHA codes for the coupling of the C3 and C4 TORE SUPRA launchers for several plasma configurations derived from reflectometry and interferometry. Unlike nearly all codes (except the ALOHA-1D code), OLGA does not require large computational resources and can be used for everyday planning of experimental runs. In particular, it is shown that the OLGA code correctly handles the coupling of the C3 and C4 launchers over a very wide range of plasma densities in front of the grill.
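
    As a generic aside on point (ii), high-order Gauss-Legendre quadrature converges very rapidly on smooth coupling-type integrands, which is what makes a small number of nodes sufficient. The integrand below is a placeholder, not an OLGA coupling element.

      import numpy as np

      f = lambda x: np.exp(-x) * np.cos(8.0 * x)     # smooth, mildly oscillatory test integrand
      a, b = 0.0, np.pi
      exact = (1.0 - np.exp(-np.pi)) / 65.0          # closed form of the integral of f on [0, pi]

      for n in (8, 16, 32):                          # Gauss-Legendre with a modest number of nodes
          xg, wg = np.polynomial.legendre.leggauss(n)
          x = 0.5 * (b - a) * xg + 0.5 * (b + a)     # map nodes from [-1, 1] to [a, b]
          approx = 0.5 * (b - a) * np.dot(wg, f(x))
          print(f"Gauss-Legendre n={n:2d}: error = {abs(approx - exact):.2e}")

      x = np.linspace(a, b, 2001)                    # second-order trapezoidal rule for comparison
      print(f"trapezoid n=2001:     error = {abs(np.trapz(f(x), x) - exact):.2e}")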

  15. IMPETUS: consistent SPH calculations of 3D spherical Bondi accretion on to a black hole

    NASA Astrophysics Data System (ADS)

    Ramírez-Velasquez, J. M.; Sigalotti, L. Di G.; Gabbasov, R.; Cruz, F.; Klapp, J.

    2018-07-01

    We present three-dimensional calculations of spherically symmetric Bondi accretion on to a stationary supermassive black hole of mass 10^8 M⊙ within a radial range of 0.02-10 pc, using a modified version of the smoothed particle hydrodynamics GADGET-2 code, which ensures approximate first-order consistency (i.e. second-order accuracy) for the particle approximation. First-order consistency is restored by allowing the number of neighbours, n_neigh, and the smoothing length, h, to vary with the total number of particles, N, such that the asymptotic limits n_neigh → ∞ and h → 0 hold as N → ∞. The ability of the method to reproduce the isothermal (γ = 1) and adiabatic (γ = 5/3) Bondi accretion is investigated with increased spatial resolution. In particular, for the isothermal models, the numerical radial profiles closely match the Bondi solution, except near the accretor, where the density and radial velocity are slightly underestimated. However, as n_neigh is increased and h is decreased, the calculations approach first-order consistency and the deviations from the Bondi solution decrease. The density and radial velocity profiles for the adiabatic models are qualitatively similar to those for the isothermal Bondi accretion. Steady-state Bondi accretion is reproduced by the highly resolved consistent models with a relative error of ≲ 1 per cent for γ = 1 and ~9 per cent for γ = 5/3, with the adiabatic accretion taking longer than the isothermal case to reach steady flow. The performance of the method is assessed by comparing the results with those obtained using the standard GADGET-2 and GIZMO codes.

  16. A new numerically stable implementation of the T-matrix method for electromagnetic scattering by spheroidal particles

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2013-07-01

    We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10^-10 relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.

  17. Extension of the HAL QCD approach to inelastic and multi-particle scatterings in lattice QCD

    NASA Astrophysics Data System (ADS)

    Aoki, S.

    We extend the HAL QCD approach, with which potentials between two hadrons can be obtained in QCD at energies below inelastic thresholds, to inelastic and multi-particle scatterings. We first derive asymptotic behaviors of the Nambu-Bethe-Salpeter (NBS) wave function at large space separations for systems with more than 2 particles, in terms of the on-shell $T$-matrix constrained by the unitarity of quantum field theories. We show that its asymptotic behavior contains phase shifts and mixing angles of $n$ particle scatterings. This property is one of the essential ingredients of the HAL QCD scheme to define "potential" from the NBS wave function in quantum field theories such as QCD. We next construct energy-independent but non-local potentials above inelastic thresholds, in terms of these NBS wave functions. We demonstrate the existence of energy-independent coupled-channel potentials with a non-relativistic approximation, where the momenta of all particles are small compared with their own masses. Combining these two results, we can employ the HAL QCD approach also to investigate inelastic and multi-particle scatterings.

  18. An Extended Proof-Carrying Code Framework for Security Enforcement

    NASA Astrophysics Data System (ADS)

    Pirzadeh, Heidar; Dubé, Danny; Hamou-Lhadj, Abdelwahab

    The rapid growth of the Internet has resulted in increased attention to security to protect users from being victims of security threats. In this paper, we focus on security mechanisms that are based on Proof-Carrying Code (PCC) techniques. In a PCC system, a code producer sends a code along with its safety proof to the consumer. The consumer executes the code only if the proof is valid. Although PCC has been shown to be a useful security framework, it suffers from the sheer size of typical proofs: proofs of even small programs can be considerably large. In this paper, we propose an extended PCC framework (EPCC) in which, instead of the proof, a proof generator for the program in question is transmitted. This framework enables the execution of the proof generator and the recovery of the proof on the consumer's side in a secure manner using a newly created virtual machine called the VEP (Virtual Machine for Extended PCC).

  19. Creation of fully vectorized FORTRAN code for integrating the movement of dust grains in interplanetary environments

    NASA Technical Reports Server (NTRS)

    Colquitt, Walter

    1989-01-01

    The main objective is to improve the performance of a specific FORTRAN computer code from the Planetary Sciences Division of NASA/Johnson Space Center when used on a modern vectorizing supercomputer. The code is used to calculate orbits of dust grains that separate from comets and asteroids. This code accounts for influences of the sun and 8 planets (neglecting Pluto), solar wind, and solar light pressure including Poynting-Robertson drag. Calculations allow one to study the motion of these particles as they are influenced by the Earth or one of the other planets. Some of these particles become trapped just beyond the Earth for long periods of time. These integer-period resonances range from 3 orbits of the Earth for every 2 orbits of the particle to ratios as high as 14 to 13.

  20. Simulation of halo particles with Simpsons

    NASA Astrophysics Data System (ADS)

    Machida, Shinji

    2003-12-01

    Recent code improvements and some simulation results of halo particles with Simpsons will be presented. We tried to identify resonance behavior of halo particles by looking at the tune evolution of individual macro particles.

  1. DCU@TRECMed 2012: Using Ad-Hoc Baselines for Domain-Specific Retrieval

    DTIC Science & Technology

    2012-11-01

    description to extend the query, for example: Patients with complicated GERD who receive endoscopy will be extended with Gastroesophageal reflux disease ... Diseases and Related Health Problems, version 9) for the patient’s admission or discharge status [1, 5]; treating negation (e.g. negative test results or...codes were mapped to a description of the code, usually a short phrase/sentence. For instance, the ICD9 code 253.5 corresponds to the disease Diabetes

  3. Pseudorapidity dependence of the anisotropic flow of charged particles in Pb–Pb collisions at √sNN = 2.76 TeV

    DOE PAGES

    Adam, J.; Adamová, D.; Aggarwal, M. M.; ...

    2016-07-11

    We present measurements of the elliptic (v2), triangular (v3) and quadrangular (v4) anisotropic azimuthal flow over a wide range of pseudorapidities (-3.5 < η < 5). The measurements are performed with Pb–Pb collisions at √sNN = 2.76 TeV using the ALICE detector at the Large Hadron Collider (LHC). The flow harmonics are obtained using two- and four-particle correlations from nine different centrality intervals covering central to peripheral collisions. We find that the shape of vn(η) is largely independent of centrality for the flow harmonics n = 2–4; however, the higher harmonics fall off more steeply with increasing |η|. We assess the validity of extended longitudinal scaling of v2 by comparing to lower energy measurements, and find that the higher harmonic flow coefficients are proportional to the charged particle densities at larger pseudorapidities. Finally, we compare our measurements to both hydrodynamical and transport models, and find they both have challenges when it comes to describing our data.

  4. Parallel O(N) Stokes’ solver towards scalable Brownian dynamics of hydrodynamically interacting objects in general geometries

    DOE PAGES

    Zhao, Xujun; Li, Jiyuan; Jiang, Xikai; ...

    2017-06-29

    An efficient parallel Stokes’ solver is developed towards the complete inclusion of hydrodynamic interactions of Brownian particles in any geometry. A Langevin description of the particle dynamics is adopted, where the long-range interactions are included using a Green’s function formalism. We present a scalable parallel computational approach, where the general geometry Stokeslet is calculated following a matrix-free algorithm using the General geometry Ewald-like method. Our approach employs a highly-efficient iterative finite element Stokes’ solver for the accurate treatment of long-range hydrodynamic interactions within arbitrary confined geometries. A combination of mid-point time integration of the Brownian stochastic differential equation, the parallel Stokes’ solver, and a Chebyshev polynomial approximation for the fluctuation-dissipation theorem results in an O(N) parallel algorithm. We also illustrate the new algorithm in the context of the dynamics of confined polymer solutions in equilibrium and non-equilibrium conditions. Our method is extended to treat suspended finite size particles of arbitrary shape in any geometry using an Immersed Boundary approach.

  5. 1,6-diisocyanatohexane-extended poly(1,4-butylene succinate)/hydroxyapatite nanoparticle scaffolds: Potential materials for bone regeneration applications

    NASA Astrophysics Data System (ADS)

    Kaur, Kulwinder; Singh, K. J.; Anand, Vikas; Bhatia, Gaurav; Nim, Lovedeep; Kaur, Manpreet; Arora, Daljit Singh

    2017-05-01

    Bioresorbable and bioactive scaffolds are promising materials for various biomedical applications including bone regeneration and drug delivery. The authors present bioactive scaffolds prepared from 1,6-diisocyanatohexane-extended poly(1,4-butylene succinate) (PBSu-DCH) with different amounts of hydroxyapatite nanoparticles (nHAp) by solvent casting and particulate leaching techniques. Different weight ratios of nHAp (i.e. 0, 5 and 10 wt %) with a fixed weight ratio (i.e. 10 wt %) of PBSu-DCH polymer have been prepared. Scaffolds have been assessed for their morphology, bioactivity, degradation, drug release and biological properties, including cytotoxicity, cell attachment using the MG-63 cell line, and antimicrobial activity. Effectual drug release has been measured by incorporating gentamycin as an antibiotic in the scaffolds. The study is aimed at developing new biodegradable scaffolds to be used in the skull, jaw and tooth socket for preserving bone mass.

  6. Charged particle tracking through electrostatic wire meshes using the finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devlin, L. J.; Karamyshev, O.; Welsch, C. P., E-mail: carsten.welsch@cockcroft.ac.uk

    Wire meshes are used across many disciplines to accelerate and focus charged particles; however, analytical solutions are non-exact and few codes exist which simulate the exact fields around a mesh with physical sizes. A tracking code based in Matlab-Simulink using field maps generated with finite element software has been developed which tracks electrons or ions through electrostatic wire meshes. The fields around such a geometry are presented as an analytical expression using several basic assumptions; however, it is apparent that computational calculations are required to obtain realistic values of electric potential and fields, particularly when multiple wire meshes are deployed. The tracking code is flexible in that any quantitatively describable particle distribution can be used for both electrons and ions, as well as offering other benefits such as ease of export to other programs for analysis. The code is made freely available and physical examples are highlighted where this code could be beneficial for different applications.

  7. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.

  8. Update and evaluation of decay data for spent nuclear fuel analyses

    NASA Astrophysics Data System (ADS)

    Simeonov, Teodosi; Wemple, Charles

    2017-09-01

    Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
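
    The final step of such an analysis is conceptually simple: once nuclide inventories N_i(t), decay constants λ_i, and recoverable energies Q_i are available, integral quantities follow from summations such as A(t) = Σ λ_i N_i(t) for activity and P(t) = Σ λ_i N_i(t) Q_i for decay heat. The two-nuclide example below uses made-up, Cs-137/Sr-90-like numbers and ignores precursor build-up (no Bateman chains); it is not SNF data.

      import numpy as np

      SECONDS_PER_YEAR = 3.156e7
      nuclides = {
          # name: (half-life [s], recoverable energy per decay [MeV], atoms at discharge)
          "nuclide_A": (30.1 * SECONDS_PER_YEAR, 0.66, 5.0e24),   # Cs-137-like placeholder
          "nuclide_B": (28.8 * SECONDS_PER_YEAR, 1.13, 4.0e24),   # Sr-90-like placeholder
      }
      MEV_TO_J = 1.602e-13

      def decay_heat(t_seconds):
          """Decay heat [W] at cooling time t, assuming pure exponential decay."""
          power = 0.0
          for half_life, q_mev, n0 in nuclides.values():
              lam = np.log(2.0) / half_life
              power += lam * n0 * np.exp(-lam * t_seconds) * q_mev * MEV_TO_J
          return power

      for years in (1, 10, 100):
          print(f"decay heat after {years:4d} y: {decay_heat(years * SECONDS_PER_YEAR):.1f} W")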

  9. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  10. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    NASA Astrophysics Data System (ADS)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. Generally speaking, there are two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allows code for user-defined heterogeneous data structures to be generated automatically.
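
    The AoS/SoA distinction can be mimicked in NumPy, using a structured array (record layout, AoS) versus one contiguous array per property (SoA); sweeping a single property is then strided in the first case and unit-stride in the second. This is only an analogy for the C++ templates discussed above, not SoAx itself, and the observed speed ratio depends on the hardware.

      import numpy as np
      import time

      n = 2_000_000
      dt = 1e-3

      # Array of Structures: one record per particle.
      aos = np.zeros(n, dtype=[("id", np.int64),
                               ("x", np.float64), ("y", np.float64), ("z", np.float64),
                               ("vx", np.float64), ("vy", np.float64), ("vz", np.float64)])

      # Structure of Arrays: one contiguous array per property.
      soa = {name: np.zeros(n) for name in ("x", "y", "z", "vx", "vy", "vz")}

      def push_aos(p):
          for c in ("x", "y", "z"):
              p[c] += dt * p["v" + c]      # strided access through the records

      def push_soa(p):
          for c in ("x", "y", "z"):
              p[c] += dt * p["v" + c]      # unit-stride access, SIMD friendly

      for label, func, data in (("AoS", push_aos, aos), ("SoA", push_soa, soa)):
          t0 = time.perf_counter()
          for _ in range(10):
              func(data)
          print(f"{label}: {time.perf_counter() - t0:.3f} s")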

  11. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems (including those with inhomogeneous plasmas) on other parallel machines once the machine-dependent parameters are known.

  12. Special features of isomeric ratios in nuclear reactions induced by various projectile particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danagulyan, A. S.; Hovhannisyan, G. H., E-mail: hov-gohar@ysu.am; Bakhshiyan, T. M.

    2016-05-15

    Calculations for (p, n) and (α, p3n) reactions were performed with the aid of the TALYS-1.4 code. Reactions in which the mass numbers of target and product nuclei were identical were examined in the range of A = 44–124. Excitation functions were obtained for product nuclei in ground and isomeric states, and isomeric ratios were calculated. The calculated data reflect well the dependence of the isomeric ratios on the projectile type. A comparison of the calculated and experimental data reveals that, for some nuclei in a high-spin state, the calculated values fall greatly short of their experimental counterparts. These discrepancies may be due to the presence of high-spin yrast states and rotational bands in these nuclei. Calculations involving various level-density models included in the TALYS-1.4 code, with allowance for the enhancement of collective effects, do not remove the discrepancies in the majority of cases.

  13. The study of correlation among different scattering parameters in an aggregate dust model

    NASA Astrophysics Data System (ADS)

    Mazarbhuiya, A. M.; Das, H. S.

    2017-09-01

    We study the light scattering properties of aggregate particles in a wide range of complex refractive indices (m = n + ik, where 1.4 ≤ n ≤ 2.0, 0.001 ≤ k ≤ 1.0) and wavelengths (0.45 ≤ λ ≤ 1.25 μm) to investigate the correlation among different parameters, e.g., the positive polarization maximum (P_{max}), the amplitude of the negative polarization (P_{min}), the geometric albedo (A), (n, k) and λ. Numerical computations are performed with the Superposition T-matrix code for Ballistic Cluster-Cluster Aggregate (BCCA) particles of 128 monomers and Ballistic Aggregate (BA) particles of 512 monomers, where the monomer radius of the aggregates is taken to be 0.1 μm. At a fixed value of k, P_{max} and n are correlated via a quadratic regression equation, and this behaviour is observed at all wavelengths. Further, P_{max} and k are found to be related via a polynomial regression equation when n is held fixed. The degree of the equation depends on the wavelength: the higher the wavelength, the lower the degree. We find that A and P_{max} are correlated via a cubic regression at λ = 0.45 μm, whereas this correlation is quadratic at higher wavelengths. We notice that |P_{min}| increases as P_{max} decreases, and a strong linear correlation between them is observed when n is held fixed and k is changed from higher to lower values. Further, at a fixed value of k, P_{min} and P_{max} can be fitted well via a quartic regression equation when n is changed from higher to lower values. We also find that P_{max} increases with λ, and the two are correlated via a quartic regression.
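    Written out, the correlations above amount to low-order polynomial fits. The following is an illustrative summary of the functional forms only; the coefficients a_i, b_i and c_i are wavelength-dependent fit parameters that are not reproduced here.

        % Illustrative functional forms only; fitted coefficients are not quoted.
        P_{\max}(n)\big|_{k\ \mathrm{fixed}} = a_2 n^{2} + a_1 n + a_0                         % quadratic in n
        A(P_{\max})\big|_{\lambda = 0.45\,\mu\mathrm{m}} = b_3 P_{\max}^{3} + b_2 P_{\max}^{2} + b_1 P_{\max} + b_0   % cubic at 0.45 um
        P_{\min}(P_{\max})\big|_{k\ \mathrm{fixed}} = \sum_{j=0}^{4} c_j P_{\max}^{\,j}        % quartic fit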

  14. IMPLEMENTATION OF SINK PARTICLES IN THE ATHENA CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong Hao; Ostriker, Eve C., E-mail: hgong@astro.umd.edu, E-mail: eco@astro.princeton.edu

    2013-01-15

    We describe the implementation and tests of sink particle algorithms in the Eulerian grid-based code Athena. The introduction of sink particles enables the long-term evolution of systems in which localized collapse occurs, and it is impractical (or unnecessary) to resolve the accretion shocks at the centers of collapsing regions. We discuss the similarities and differences of our methods compared to other implementations of sink particles. Our criteria for sink creation are motivated by the properties of the Larson-Penston collapse solution. We use standard particle-mesh methods to compute particle and gas gravity together. Accretion of mass and momentum onto sinks is computed using fluxes returned by the Riemann solver. A series of tests based on previous analytic and numerical collapse solutions is used to validate our method and implementation. We demonstrate use of our code for applications with a simulation of planar converging supersonic turbulent flow, in which multiple cores form and collapse to create sinks; these sinks continue to interact and accrete from their surroundings over several Myr.

  15. PARMELA_B: a new version of PARMELA with coherent synchrotron radiation effects and a finite difference space charge routine

    NASA Astrophysics Data System (ADS)

    Koltenbah, Benjamin E. C.; Parazzoli, Claudio G.; Greegor, Robert B.; Dowell, David H.

    2002-07-01

    Recent interest in advanced laser light sources has stimulated development of accelerator systems of intermediate beam energy, 100-200 MeV, and high charge, 1-10 nC, for high power FEL applications, and of high energy, 1-2 GeV, high charge systems for SASE-FEL applications. The current generation of beam transport codes, which were developed for high-energy, low-charge beams with low self-fields, is inadequate to address this energy and charge regime, and better computational tools are required to accurately calculate self-fields. To that end, we have developed a new version of PARMELA, named PARMELA_B and written in Fortran 95, which includes a coherent synchrotron radiation (CSR) routine and an improved, generalized space charge (SC) routine. An electron bunch is simulated by a collection of macro-particles, which traverses a series of beam line elements. At each time step through the calculation, the momentum of each particle is updated due to the presence of external and self-fields. The self-fields are due to CSR and SC. For the CSR calculations, the macro-particles are further combined into macro-particle bins that follow the central trajectory of the bend. The energy change through the time step is calculated from expressions derived from the Liénard-Wiechert formulae, and from this energy change the particle's momentum is updated. For the SC calculations, we maintain the same rest-frame-electrostatic approach of the original PARMELA; however, we employ a finite difference Poisson equation solver instead of the symmetrical ring algorithm of the original code. In this way, we relax the symmetry assumptions in the original code. This method is based upon standard numerical procedures and conserves momentum to first order. The SC computational grid is adaptive and conforms to the size of the pulse as it evolves through the calculation. We provide descriptions of these two algorithms, validation comparisons with other CSR and SC methods, and a limited comparison with experimental results.
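    As a rough illustration of the finite-difference approach to the space-charge problem, a Jacobi relaxation of the one-dimensional Poisson equation d²φ/dx² = -ρ/ε₀ with fixed boundary values looks like the sketch below. This is a minimal, assumed example, not the PARMELA_B routine, which works on an adaptive three-dimensional grid.

        // Illustrative 1D Jacobi relaxation for d2(phi)/dx2 = -rho/eps0 with
        // phi held fixed at both ends. Not the PARMELA_B solver; names are ours.
        #include <vector>

        void jacobi_poisson_1d(const std::vector<double>& rho, std::vector<double>& phi,
                               double dx, double eps0, int iterations) {
            const std::size_t n = rho.size();
            std::vector<double> next(phi);
            for (int it = 0; it < iterations; ++it) {
                for (std::size_t i = 1; i + 1 < n; ++i) {
                    // phi_i = (phi_{i-1} + phi_{i+1} + dx^2 * rho_i / eps0) / 2
                    next[i] = 0.5 * (phi[i - 1] + phi[i + 1] + dx * dx * rho[i] / eps0);
                }
                phi.swap(next);
            }
        }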

  16. Code Development in Coupled PARCS/RELAP5 for Supercritical Water Reactor

    DOE PAGES

    Hu, Po; Wilson, Paul

    2014-01-01

    A new capability is added to the existing coupled code package PARCS/RELAP5 in order to analyze SCWR designs under supercritical pressure with separated water coolant and moderator channels. This extension is carried out in both codes. In PARCS, modification is focused on extending the water property tables to supercritical pressure, modifying the variable mapping input file and the related code module for processing thermal-hydraulic information from separated coolant/moderator channels, and modifying the neutronics feedback module to deal with the separated coolant/moderator channels. In RELAP5, modification is focused on incorporating more accurate water properties near SCWR operating/transient pressures and temperatures in the code. Confirmatory tests of the modifications are presented, and the major analysis results from the extended code package are summarized.

  17. Combining electromagnetic gyro-kinetic particle-in-cell simulations with collisions

    NASA Astrophysics Data System (ADS)

    Slaby, Christoph; Kleiber, Ralf; Könies, Axel

    2017-09-01

    It has been an open question whether for electromagnetic gyro-kinetic particle-in-cell (PIC) simulations pitch-angle collisions and the recently introduced pullback transformation scheme (Mishchenko et al., 2014; Kleiber et al., 2016) are consistent. This question is positively answered by comparing the PIC code EUTERPE with an approach based on an expansion of the perturbed distribution function in eigenfunctions of the pitch-angle collision operator (Legendre polynomials) to solve the electromagnetic drift-kinetic equation with collisions in slab geometry. It is shown how both approaches yield the same results for the frequency and damping rate of a kinetic Alfvén wave and how the perturbed distribution function is substantially changed by the presence of pitch-angle collisions.
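    The comparison approach described above rests on the standard property that Legendre polynomials in the pitch variable ξ = v∥/v are eigenfunctions of the Lorentz pitch-angle collision operator, so that an expansion of the perturbed distribution function diagonalizes the collisions. In the usual notation (this is the generic textbook form, not necessarily the exact normalization used in EUTERPE),

        C[f] = \frac{\nu}{2}\,\frac{\partial}{\partial \xi}\!\left[(1-\xi^{2})\,\frac{\partial f}{\partial \xi}\right],
        \qquad
        C[P_{\ell}(\xi)] = -\frac{\nu}{2}\,\ell(\ell+1)\,P_{\ell}(\xi),

    so that writing f = Σ_ℓ f_ℓ P_ℓ(ξ) turns the collision term into a simple damping of each expansion coefficient f_ℓ.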

  18. Shaped nanocrystal particles and methods for making the same

    DOEpatents

    Alivisatos, A Paul [Oakland, CA; Scher, Erik C [Menlo Park, CA; Manna, Liberato [Berkeley, CA

    2011-11-22

    Shaped nanocrystal particles and methods for making shaped nanocrystal particles are disclosed. One embodiment includes a method for forming a branched, nanocrystal particle. It includes (a) forming a core having a first crystal structure in a solution, (b) forming a first arm extending from the core having a second crystal structure in the solution, and (c) forming a second arm extending from the core having the second crystal structure in the solution.

  19. Shaped nanocrystal particles and methods for making the same

    DOEpatents

    Alivisatos, A. Paul; Scher, Erik C; Manna, Liberato

    2013-12-17

    Shaped nanocrystal particles and methods for making shaped nanocrystal particles are disclosed. One embodiment includes a method for forming a branched, nanocrystal particle. It includes (a) forming a core having a first crystal structure in a solution, (b) forming a first arm extending from the core having a second crystal structure in the solution, and (c) forming a second arm extending from the core having the second crystal structure in the solution.

  20. Shaped nanocrystal particles and methods for making the same

    DOEpatents

    Alivisatos, A. Paul; Sher, Eric C.; Manna, Liberato

    2007-12-25

    Shaped nanocrystal particles and methods for making shaped nanocrystal particles are disclosed. One embodiment includes a method for forming a branched, nanocrystal particle. It includes (a) forming a core having a first crystal structure in a solution, (b) forming a first arm extending from the core having a second crystal structure in the solution, and (c) forming a second arm extending from the core having the second crystal structure in the solution.

  1. Shaped Nanocrystal Particles And Methods For Making The Same

    DOEpatents

    Alivisatos, A. Paul; Scher, Erik C.; Manna, Liberato

    2005-02-15

    Shaped nanocrystal particles and methods for making shaped nanocrystal particles are disclosed. One embodiment includes a method for forming a branched, nanocrystal particle. It includes (a) forming a core having a first crystal structure in a solution, (b) forming a first arm extending from the core having a second crystal structure in the solution, and (c) forming a second arm extending from the core having the second crystal structure in the solution.

  2. High-Speed Particle-in-Cell Simulation Parallelized with Graphic Processing Units for Low Temperature Plasmas for Material Processing

    NASA Astrophysics Data System (ADS)

    Hur, Min Young; Verboncoeur, John; Lee, Hae June

    2014-10-01

    Particle-in-cell (PIC) simulations offer higher fidelity than fluid simulations for plasma devices that require transient kinetic modeling. They use fewer approximations to the plasma kinetics but require many particles and grid cells to obtain meaningful results, which means that the simulation time grows in proportion to the number of particles. Therefore, PIC simulation needs high performance computing. In this research, a graphic processing unit (GPU) is adopted for high performance computing of PIC simulations of low temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU). NVIDIA GeForce GPUs with hundreds of cores, which offer cost-effective performance, were used for the tests. The PIC algorithm is divided into two modules, a field solver and a particle mover. The particle mover module is divided into four routines, named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motions as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single-CPU code. This work was supported by the Korea Institute of Science Technology Information.
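    The four-routine split of the particle mover can be sketched as follows. This is a plain C++ outline with invented names, assuming a simple 2D electrostatic step; it is not the authors' GPU implementation, where each routine would be a separate kernel.

        // Illustrative outline of one PIC particle-mover step split into the
        // "move" and "boundary" routines named above (MCC and deposit are only
        // indicated). Hypothetical names; not the authors' GPU code.
        struct Particle { double x, y, vx, vy; };

        void move(Particle& p, double ex, double ey, double qm, double dt) {
            p.vx += qm * ex * dt;  p.vy += qm * ey * dt;   // accelerate in the local field
            p.x  += p.vx * dt;     p.y  += p.vy * dt;      // advance the position
        }

        void boundary(Particle& p, double lx, double ly) {
            // Example choice: periodic in x, reflecting walls in y.
            if (p.x < 0.0)       p.x += lx;
            else if (p.x >= lx)  p.x -= lx;
            if (p.y < 0.0)     { p.y = -p.y;           p.vy = -p.vy; }
            else if (p.y > ly) { p.y = 2.0 * ly - p.y; p.vy = -p.vy; }
        }

        // mcc(p, ...) would test each particle against a collision probability,
        // and deposit(p, grid) would accumulate charge onto the mesh before the
        // field solver is called; both are omitted here for brevity.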

  3. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on NMTC and MCNP for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. Future development of PHITS includes better parameterization of the JQMD model used for nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions has passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle up to 4 degrees. © 2005 Published by Elsevier Ltd on behalf of COSPAR.

  4. Damping Rate Measurements of Medium n Alfvén Eigenmodes in JET

    NASA Astrophysics Data System (ADS)

    Klein, Alexander; Testa, Duccio; Snipes, Joseph; Fasoli, Ambrogio; Carfantan, Hervé

    2007-11-01

    Alfvén Eigenmodes (AEs) with mode numbers 5 < n < 20 are expected to be unstable in burning tokamak plasmas and may lead to loss of fast particle confinement. The active MHD spectroscopy program at JET has already provided a wealth of information about low-n (n <= 2) AEs in the past decade, but a recently installed array of four antennas is capable of driving perturbations with higher mode numbers (n < 100, 30 < f < 350 kHz). In the latest JET campaign, the damping rates for several types of AEs were measured parasitically in a wide range of tokamak scenarios. We review the active MHD diagnostic and present the first measurements of medium-n AE stability on JET, then describe future plans for the active MHD spectroscopy project. The data analysis involves a novel method for resolving multiple AEs that exist at identical frequencies, which uses techniques based on the SparSpec code.

  5. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    NASA Astrophysics Data System (ADS)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-Body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structure at small scales. Most previous attempts at FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. AX-GADGET provides an interesting alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and results of other codes, validating the QP as a crucial player in structure formation at small scales.
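    For reference, the quantum potential in question is the Madelung term that arises when the Schrödinger-Poisson system is written in fluid form. The generic form is quoted here as context; the code evaluates the density ρ and its derivatives with SPH kernels, and its exact normalization may differ.

        Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}},
        \qquad
        \mathbf{a}_{\mathrm{QP}} = -\frac{1}{m}\,\nabla Q ,

    which is negligible for smooth, slowly varying density fields and becomes important at small scales, where it suppresses collapse.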

  6. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.

  7. Proton radiography and fluoroscopy of lung tumors: A Monte Carlo study using patient-specific 4DCT phantoms

    PubMed Central

    Han, Bin; Xu, X. George; Chen, George T. Y.

    2011-01-01

    Purpose: Monte Carlo methods are used to simulate and optimize a time-resolved proton range telescope (TRRT) for localization of intrafractional and interfractional motions of lung tumors and for quantification of proton range variations. Methods: The Monte Carlo N-Particle eXtended (MCNPX) code with a particle tracking feature was employed to evaluate the TRRT performance, especially in visualizing and quantifying proton range variations during respiration. Protons of 230 MeV were tracked one by one as they passed through position detectors, the patient 4DCT phantom, and finally the scintillator detectors that measured residual ranges. The energy response of the scintillator telescope was investigated. Mass density and elemental composition of tissues were defined for the 4DCT data. Results: Proton water equivalent length (WEL) was deduced by a reconstruction algorithm that incorporates linear proton tracks and lateral spatial discrimination to improve the image quality. 4DCT data for three patients were used to visualize and measure tumor motion and WEL variations. The tumor trajectories extracted from the WEL map were found to be within ∼1 mm agreement with direct 4DCT measurement. Quantitative WEL variation studies showed that the proton radiograph is a good representation of WEL changes from the entrance to the distal side of the target. Conclusions: MCNPX simulation results showed that TRRT can accurately track the motion of the tumor and detect the WEL variations. Image quality was optimized by choosing the proton energy, testing parameters of the image reconstruction algorithm, and comparing to ground truth 4DCT. A future study will demonstrate the feasibility of using time-resolved proton radiography as an imaging tool for proton treatments of lung tumors. PMID:21626923

  8. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes such as elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problems is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method (a minimal sketch follows this record). The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual exploration. Catalogue identifier: AERR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 111327 No. of bytes in distributed program, including test data, etc.: 608411 Distribution format: tar.gz Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: Source Code 4.5 MB Complete package 242 MB Classification: 14, 16.9. External routines: OpenGL, OpenCL Nature of problem: Integrate N-body simulations, mass-spring models Solution method: Numerical integration of N-body simulations, 3D rendering via OpenGL. Running time: Problem dependent
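    A minimal sketch of the Euler step mentioned above, for a small set of gravitating point masses in two dimensions. It is illustrative only and is not MPPhys code; the names, the softening parameter, and the choice of units (G = 1) are assumptions for the example.

        // Minimal explicit-Euler step for an N-body system under Newtonian gravity.
        // Illustrative of the method discussed in the abstract, not MPPhys code.
        #include <cmath>
        #include <vector>

        struct Body { double m, x, y, vx, vy; };

        void euler_step(std::vector<Body>& bodies, double dt, double G = 1.0) {
            const double eps2 = 1e-6;                       // softening to avoid r -> 0
            std::vector<double> ax(bodies.size(), 0.0), ay(bodies.size(), 0.0);
            for (std::size_t i = 0; i < bodies.size(); ++i)
                for (std::size_t j = 0; j < bodies.size(); ++j) {
                    if (i == j) continue;
                    const double dx = bodies[j].x - bodies[i].x;
                    const double dy = bodies[j].y - bodies[i].y;
                    const double r2 = dx * dx + dy * dy + eps2;
                    const double inv_r3 = 1.0 / (std::sqrt(r2) * r2);
                    ax[i] += G * bodies[j].m * dx * inv_r3;
                    ay[i] += G * bodies[j].m * dy * inv_r3;
                }
            for (std::size_t i = 0; i < bodies.size(); ++i) {   // explicit Euler update
                bodies[i].x  += bodies[i].vx * dt;
                bodies[i].y  += bodies[i].vy * dt;
                bodies[i].vx += ax[i] * dt;
                bodies[i].vy += ay[i] * dt;
            }
        }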

  9. Radioactive ion beams produced by neutron-induced fission at ISOLDE

    NASA Astrophysics Data System (ADS)

    Catherall, R.; Lettry, J.; Gilardoni, S.; Köster, U.; Isolde Collaboration

    2003-05-01

    The production rates of neutron-rich fission products for the next-generation radioactive beam facility EURISOL [EU-RTD Project EURISOL (HPRI-CT-1999-50001)] are mainly limited by the maximum amount of power deposited by protons in the target. An alternative approach is to use neutron beams to induce fission in actinide targets. This has the advantage of reducing: the energy deposited by the proton beam in the target; contamination from neutron-deficient isobars that would be produced by spallation; and mechanical stress on the target. At ISOLDE CERN [E. Kugler, Hyperfine Interact. 129 (2000) 23], tests have been made on standard ISOLDE actinide targets using fast-neutron bunches produced by bombarding thick, high-Z metal converters with 1 and 1.4 GeV proton pulses. This paper reviews the first applications of converters used at ISOLDE. It highlights the different geometries and the techniques used to compare fission yields produced by the proton beam directly on the target with neutron-induced fission. Results from the six targets already tested, namely UC2/graphite and ThO2 targets with tungsten and tantalum converters, are presented. To gain further knowledge for the design of a dedicated target as required by the TARGISOL project [EU-RTD Project TARGISOL (HPRI-CT-2001-50033)], the results are compared to simulations of the neutron flux from the converters interacting with the actinide targets, using the MARS code [N.V. Mokhov, S.I. Striganov, A. Van Ginneken, S.G. Mashnik, A.J. Sierk, J. Ranft, MARS code developments, in: 4th Workshop on Simulating Accelerator Radiation Environments, SARE-4, Knoxville, USA, 14-15.9.1998, FERMILAB-PUB-98-379, nucl-th/9812038; N.V. Mokhov, The MARS Code System User's Guide, Fermilab-FN-628, 1995; N.V. Mokhov, MARS Code Developments, Benchmarking and Applications, Fermilab-Conf-00-066, 2000; O.E. Krivosheev, N.V. Mokhov, A New MARS and its Applications, Fermilab-Conf-98/43, 1998] interfaced with MCNP libraries [J.S. Hendricks, MCNP4C, LANL Memo X-5:JSH-2000-3; J.F. Briesmeister (Ed.), MCNP - A General Monte Carlo N-Particle Transport Code, Version 4C, LA-13709-M].

  11. Moving Towards a State of the Art Charge-Exchange Reaction Code

    NASA Astrophysics Data System (ADS)

    Poxon-Pearson, Terri; Nunes, Filomena; Potel, Gregory

    2017-09-01

    Charge-exchange reactions have a wide range of applications, including late stellar evolution, constraining the matrix elements for neutrinoless double β-decay, and exploring the symmetry energy and other aspects of exotic nuclear matter. Still, much of the reaction theory needed to describe these transitions is underdeveloped and relies on assumptions and simplifications that are often extended outside their region of validity. In this work, we have begun to move towards a state-of-the-art charge-exchange reaction code. As a first step, we focus on Fermi transitions using a Lane potential in a few-body, Distorted Wave Born Approximation (DWBA) framework. We have focused on maintaining a modular structure for the code so that we can later incorporate complications such as nonlocality, breakup, and microscopic inputs. Results using this new charge-exchange code will be shown in comparison with previous analyses for the case of 48Ca(p,n)48Sc. This work was supported in part by the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through the U.S. DOE Cooperative Agreement No. DE-FG52-08NA2855.

  12. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering.

    PubMed

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-10-24

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface can even simplify the design and optimization procedures owing to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain the required electromagnetic responses, and they require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a single geometric-phase-based structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns, dependent on the incident polarization, can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.
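    The geometric (Pancharatnam-Berry) phase exploited here is set purely by the meta-particle orientation: under circularly polarized illumination, rotating an anisotropic element by an angle θ adds a phase of ±2θ to the cross-polarized scattered field, with the sign determined by the handedness. The relation below is the standard textbook form quoted for context, not a result taken from this paper; a 1-bit code therefore needs only two orientations.

        \varphi = \pm 2\theta, \qquad
        \theta \in \{0^{\circ},\,90^{\circ}\} \;\Longrightarrow\; \varphi \in \{0,\,\pi\}
        \quad \text{(the "0" and "1" coding states).}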

  13. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering

    PubMed Central

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-01-01

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface can even simplify the design and optimization procedures owing to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain the required electromagnetic responses, and they require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a single geometric-phase-based structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns, dependent on the incident polarization, can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence. PMID:27775064

  14. COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics

    NASA Astrophysics Data System (ADS)

    Barletta, Paolo

    2012-02-01

    Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually along its trajectory, so properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions or the possible presence of more than two species in the trap. New version program summary Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are treated with an acceptance/rejection mechanism, that is, by comparing a random number to the collision probability defined in terms of the inter-particle cross section and centre-of-mass energy (a minimal sketch of this test follows this record). All particles in the trap are individually simulated, so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test-run results could be replicated only poorly, as a consequence of the simulations being very sensitive to the machine background noise. In practice, as the particles are simulated for billions of steps, a small difference in the initial conditions due to the finite precision of double-precision reals can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness we have introduced a quadruple-precision version of the code which yields the same results independently of the software used to compile it or the hardware architecture on which the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The Random Number Generator routine, which is the computational core of the algorithm, has been re-written in C++, and there is no longer any need for cross FORTRAN-C++ compilation.
A quadruple precision version of the code is provided alongside the original double precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the code file system look neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However it is convenient to replicate each simulation several times with different initialisations of the random sequence.
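    The acceptance/rejection collision test at the heart of the DSMC step can be sketched as follows. This is an illustrative C++ fragment under generic DSMC assumptions, not code from the COOL distribution; the names and the normalization of the collision probability are invented for the example.

        // Illustrative DSMC acceptance/rejection test for one candidate pair in a
        // cell of volume 'cell_volume' over a time step 'dt'. Not COOL source code.
        #include <random>

        bool collide_pair(double sigma,         // elastic cross section at this energy
                          double v_rel,         // relative speed of the pair
                          double weight,        // physical particles per macro-particle
                          double cell_volume,
                          double dt,
                          std::mt19937& rng) {
            std::uniform_real_distribution<double> uni(0.0, 1.0);
            const double p_coll = weight * sigma * v_rel * dt / cell_volume;  // assumed << 1
            return uni(rng) < p_coll;           // accept the collision with probability p_coll
        }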

  15. Review of heavy charged particle transport in MCNP6.2

    NASA Astrophysics Data System (ADS)

    Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.

    2018-04-01

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  16. Review of Heavy Charged Particle Transport in MCNP6.2

    DOE PAGES

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George; ...

    2018-01-05

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. Here, this article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models’ theories are included as well.

  17. Using a Euclid distance discriminant method to find protein coding genes in the yeast genome.

    PubMed

    Zhang, Chun-Ting; Wang, Ju; Zhang, Ren

    2002-02-01

    The Euclid distance discriminant method is used to find protein coding genes in the yeast genome, based on the single nucleotide frequencies at the three codon positions in the ORFs. The method is extremely simple and may be extended to find genes in prokaryotic genomes or in eukaryotic genomes with fewer introns. Six-fold cross-validation tests have demonstrated that the accuracy of the algorithm is better than 93%. Based on this, it is found that the total number of protein coding genes in the yeast genome is at most 5579, about 3.8-7.0% less than the currently widely accepted figure of 5800-6000. The base compositions at the three codon positions are analyzed in detail using a graphic method. The result shows that the preferred codons adopted by yeast genes are of the RGW type, where R, G and W indicate purine, non-G and A/T bases, respectively, whereas the 'codons' in the intergenic sequences are of the form NNN, where N denotes any base. This fact constitutes the basis of the algorithm for distinguishing between coding and non-coding ORFs in the yeast genome. The names of putative non-coding ORFs are listed in detail.
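    A hedged sketch of the discriminant idea: each ORF is reduced to a 12-component vector of single-nucleotide frequencies at the three codon positions and assigned to whichever class centre (coding or non-coding) is nearer in Euclidean distance. The feature layout and names below are illustrative assumptions, not the authors' implementation, and the class centres would have to be estimated from training data.

        // Illustrative Euclidean-distance discriminant: classify an ORF by comparing
        // its 12-dimensional frequency vector (4 bases x 3 codon positions) with the
        // coding and non-coding class centres. Names and centres are hypothetical.
        #include <array>
        #include <string>

        using Features = std::array<double, 12>;   // f[base*3 + codon_position]

        Features codon_position_frequencies(const std::string& orf) {
            Features f{};                          // zero-initialized counts
            const std::string bases = "ACGT";
            std::array<int, 3> n_codons{0, 0, 0};
            for (std::size_t i = 0; i + 2 < orf.size(); i += 3)
                for (int pos = 0; pos < 3; ++pos) {
                    const auto b = bases.find(orf[i + pos]);
                    if (b == std::string::npos) continue;   // skip non-ACGT symbols
                    f[b * 3 + pos] += 1.0;
                    ++n_codons[pos];
                }
            for (int b = 0; b < 4; ++b)            // convert counts to frequencies
                for (int pos = 0; pos < 3; ++pos)
                    if (n_codons[pos] > 0) f[b * 3 + pos] /= n_codons[pos];
            return f;
        }

        double dist2(const Features& a, const Features& b) {
            double d = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
            return d;
        }

        // Returns true if the ORF is closer to the (pre-computed) coding centre.
        bool is_coding(const std::string& orf, const Features& coding_centre,
                       const Features& noncoding_centre) {
            const Features f = codon_position_frequencies(orf);
            return dist2(f, coding_centre) < dist2(f, noncoding_centre);
        }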

  18. A Study of the Effects of Detergents on Typical Bilge Waters and Correlation of Oil Particle Sizes

    DTIC Science & Technology

    1975-07-01

    Oil removal systems for treating bilge water are drastically affected by the condition ... to develop efficient oil removal systems for treating discharged bilge waters. The oil-removing efficiency of any oil-water separator is grossly ... difficulty in removal from the bilge water during or prior to discharge. However, all bilges are usually collection points for other various ...

  19. Full-wave simulations of ICRF heating regimes in toroidal plasma with non-Maxwellian distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertelli, N.; Valeo, E. J.; Green, D. L.

    At the power levels required for significant heating and current drive in magnetically-confined toroidal plasma, modification of the particle distribution function from a Maxwellian shape is likely (Stix 1975 Nucl. Fusion 15 737), with consequent changes in wave propagation and in the location and amount of absorption. In order to study these effects computationally, both the finite-Larmor-radius and the high-harmonic fast wave (HHFW) versions of the full-wave, hot-plasma toroidal simulation code TORIC (Brambilla 1999 Plasma Phys. Control. Fusion 41 1 and Brambilla 2002 Plasma Phys. Control. Fusion 44 2423) have been extended to allow the prescription of arbitrary velocity distributions of the form f(v∥, v⊥, ψ, θ). For hydrogen (H) minority heating of a deuterium (D) plasma with anisotropic Maxwellian H distributions, the fractional H absorption varies significantly with changes in parallel temperature but is essentially independent of perpendicular temperature. On the other hand, for the HHFW regime with an anisotropic Maxwellian fast-ion distribution, the fractional beam ion absorption varies mainly with changes in the perpendicular temperature. The evaluation of the wave field and power absorption, through the full-wave solver, with the ion distribution function provided by either a Monte-Carlo particle code or a Fokker-Planck code is also examined for Alcator C-Mod and NSTX plasmas. Non-Maxwellian effects generally tend to increase the absorption with respect to the equivalent Maxwellian distribution.

  20. Full-wave simulations of ICRF heating regimes in toroidal plasmas with non-Maxwellian distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertelli, N.; Valeo, E.J.; Green, D.L.

    At the power levels required for significant heating and current drive in magnetically-confined toroidal plasma, modification of the particle distribution function from a Maxwellian shape is likely [T. H. Stix, Nucl. Fusion 15, 737 (1975)], with consequent changes in wave propagation and in the location and amount of absorption. In order to study these effects computationally, both the finite-Larmor-radius and the high-harmonic fast wave (HHFW) versions of the full-wave, hot-plasma toroidal simulation code TORIC [M. Brambilla, Plasma Phys. Control. Fusion 41, 1 (1999) and M. Brambilla, Plasma Phys. Control. Fusion 44, 2423 (2002)] have been extended to allow the prescription of arbitrary velocity distributions of the form f(v∥, v⊥, ψ, θ). For hydrogen (H) minority heating of a deuterium (D) plasma with anisotropic Maxwellian H distributions, the fractional H absorption varies significantly with changes in parallel temperature but is essentially independent of perpendicular temperature. On the other hand, for the HHFW regime with an anisotropic Maxwellian fast-ion distribution, the fractional beam ion absorption varies mainly with changes in the perpendicular temperature. The evaluation of the wave field and power absorption, through the full-wave solver, with the ion distribution function provided by either a Monte-Carlo particle code or a Fokker-Planck code is also examined for Alcator C-Mod and NSTX plasmas. Non-Maxwellian effects generally tend to increase the absorption with respect to the equivalent Maxwellian distribution.
