Sample records for particle code xgc0

  1. Extension of the XGC code for global gyrokinetic simulations in stellarator geometry

    NASA Astrophysics Data System (ADS)

    Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock

    2017-10-01

    In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.

  2. XGC developments for a more efficient XGC-GENE code coupling

    NASA Astrophysics Data System (ADS)

    Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, C. S.

    2017-10-01

    In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To permit such simulations, the gyrokinetic codes GENE and XGC will be coupled together, and numerical efforts are being made to improve the agreement of their numerical schemes in the coupling region. One of the difficulties of coupling these codes is the incompatibility of their grids: GENE is a continuum grid-based code, while XGC is a particle-in-cell code using an unstructured triangular mesh. A field-aligned filter has therefore been implemented in XGC. Although XGC already uses an approximately field-following mesh, this filter yields a perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the local plasma properties, ensuring the same accuracy from the core to the edge regions.
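A field-aligned filter of the kind described above can be sketched spectrally. This is an illustrative stand-in, not XGC's mesh-based implementation: on a uniform (theta, zeta) grid, a perturbation ~ exp(i(m*theta + n*zeta)) has a parallel wavenumber that scales with (m - n*q), so only Fourier modes with |m - n*q| small are kept.

```python
import numpy as np

def field_aligned_filter(phi, q, width=1.5):
    """Keep only nearly field-aligned Fourier modes of phi(theta, zeta).

    For a perturbation ~ exp(i(m*theta + n*zeta)), the parallel wavenumber
    scales with (m - n*q); field-aligned turbulence has |m - n*q| small.
    Illustrative spectral version only: XGC's filter acts on its
    unstructured triangular mesh, not on a uniform spectral grid.
    """
    n_theta, n_zeta = phi.shape
    ft = np.fft.fft2(phi)
    m = np.fft.fftfreq(n_theta, d=1.0 / n_theta)         # poloidal mode numbers
    n = np.fft.fftfreq(n_zeta, d=1.0 / n_zeta)           # toroidal mode numbers
    mask = np.abs(m[:, None] - q * n[None, :]) <= width  # nearly field-aligned
    return np.real(np.fft.ifft2(ft * mask))
```

For example, with safety factor q = 2, the mode (m, n) = (2, 1) is retained while a purely poloidal (m, n) = (5, 0) mode is removed.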

  3. A fast low-to-high confinement mode bifurcation dynamics in the boundary-plasma gyrokinetic code XGC1

    NASA Astrophysics Data System (ADS)

    Ku, S.; Chang, C. S.; Hager, R.; Churchill, R. M.; Tynan, G. R.; Cziegler, I.; Greenwald, M.; Hughes, J.; Parker, S. E.; Adams, M. F.; D'Azevedo, E.; Worley, P.

    2018-05-01

    A fast edge turbulence suppression event has been simulated in the electrostatic version of the gyrokinetic particle-in-cell code XGC1 in a realistic diverted tokamak edge geometry under neutral particle recycling. The results show that turbulent Reynolds stress followed by neoclassical ion orbit loss together conspire to form the sustaining radial electric field shear and to quench turbulent transport just inside the last closed magnetic flux surface. The main suppression action is located in a thin radial layer around ψN ≃ 0.96–0.98, where ψN is the normalized poloidal flux, with a time scale of ~0.1 ms.

  4. Effect of anomalous transport on kinetic simulations of the H-mode pedestal

    NASA Astrophysics Data System (ADS)

    Bateman, G.; Pankin, A. Y.; Kritz, A. H.; Rafiq, T.; Park, G. Y.; Ku, S.; Chang, C. S.

    2009-11-01

    The MMM08 and MMM95 Multi-Mode transport models [1,2], are used to investigate the effect of anomalous transport in XGC0 gyrokinetic simulations [3] of tokamak H-mode pedestal growth. Transport models are implemented in XGC0 using the Framework for Modernization and Componentization of Fusion Modules (FMCFM). Anomalous transport is driven by steep temperature and density gradients and is suppressed by high values of flow shear in the pedestal. The radial electric field, used to calculate the flow shear rate, is computed self-consistently in the XGC0 code with the anomalous transport, Lagrangian charged particle dynamics and neutral particle effects. XGC0 simulations are used to provide insight into how thermal and particle transport, together with the sources of heat and charged particles, determine the shape and growth rate of the temperature and density profiles. [1] F.D. Halpern et al., Phys. Plasmas 15 (2008) 065033; J.Weiland et al., Nucl. Fusion 49 (2009) 965933; A.Kritz et al., EPS (2009) [2] G. Bateman, et al, Phys. Plasmas 5 (1998) 1793 [3] C.S. Chang, S. Ku, H. Weitzner, Phys. Plasmas 11 (2004) 2649

  5. A fast low-to-high confinement mode bifurcation dynamics in the boundary-plasma gyrokinetic code XGC1

    DOE PAGES

    Ku, S.; Chang, C. S.; Hager, R.; ...

    2018-04-18

    Here, a fast edge turbulence suppression event has been simulated in the electrostatic version of the gyrokinetic particle-in-cell code XGC1 in a realistic diverted tokamak edge geometry under neutral particle recycling. The results show that turbulent Reynolds stress followed by neoclassical ion orbit loss together conspire to form the sustaining radial electric field shear and to quench turbulent transport just inside the last closed magnetic flux surface. As a result, the main suppression action is located in a thin radial layer around ψN ≃ 0.96–0.98, where ψN is the normalized poloidal flux, with a time scale of ~0.1 ms.

  6. Coupled Kinetic-MHD Simulations of Divertor Heat Load with ELM Perturbations

    NASA Astrophysics Data System (ADS)

    Cummings, Julian; Chang, C. S.; Park, Gunyoung; Sugiyama, Linda; Pankin, Alexei; Klasky, Scott; Podhorszki, Norbert; Docan, Ciprian; Parashar, Manish

    2010-11-01

    The effect of Type-I ELM activity on divertor plate heat load is a key component of the DOE OFES Joint Research Target milestones for this year. In this talk, we present simulations of kinetic edge physics, ELM activity, and the associated divertor heat loads in which we couple the discrete guiding-center neoclassical transport code XGC0 with the nonlinear extended MHD code M3D using the End-to-end Framework for Fusion Integrated Simulations, or EFFIS. In these coupled simulations, the kinetic code and the MHD code run concurrently on the same massively parallel platform and periodic data exchanges are performed using a memory-to-memory coupling technology provided by EFFIS. The M3D code models the fast ELM event and sends frequent updates of the magnetic field perturbations and electrostatic potential to XGC0, which in turn tracks particle dynamics under the influence of these perturbations and collects divertor particle and energy flux statistics. We describe here how EFFIS technologies facilitate these coupled simulations and discuss results for DIII-D, NSTX and Alcator C-Mod tokamak discharges.

  7. Verification of long wavelength electromagnetic modes with a gyrokinetic-fluid hybrid model in the XGC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hager, Robert; Lang, Jianying; Chang, C. S.

    As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons. Here, two representative long wavelength modes, shear Alfven waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries.

  8. Verification of long wavelength electromagnetic modes with a gyrokinetic-fluid hybrid model in the XGC code

    DOE PAGES

    Hager, Robert; Lang, Jianying; Chang, C. S.; ...

    2017-05-24

    As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons. Here, two representative long wavelength modes, shear Alfven waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries.

  9. Verification of long wavelength electromagnetic modes with a gyrokinetic-fluid hybrid model in the XGC code

    PubMed Central

    Lang, Jianying; Ku, S.; Chen, Y.; Parker, S. E.; Adams, M. F.

    2017-01-01

    As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons analogous to Chen and Parker [Phys. Plasmas 8, 441 (2001)]. Two representative long wavelength modes, shear Alfvén waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries. PMID:29104419
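The long-wavelength benchmark used in these verification studies can be stated compactly: in ideal MHD, shear Alfvén waves obey ω = k∥ v_A with v_A = B / sqrt(μ0 n_i m_i). A minimal numerical sketch, with illustrative deuterium-like parameters that are assumptions, not values from the paper:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def alfven_speed(B, n_i, m_i):
    """Alfven speed v_A = B / sqrt(mu0 * n_i * m_i), in m/s."""
    return B / math.sqrt(MU0 * n_i * m_i)

def shear_alfven_omega(k_par, B, n_i, m_i):
    """Ideal-MHD shear Alfven dispersion: omega = k_par * v_A, in rad/s."""
    return k_par * alfven_speed(B, n_i, m_i)

# Illustrative parameters: B = 2 T, deuterium density 1e19 m^-3
M_D = 2.0 * 1.6726e-27  # approximate deuteron mass [kg]
v_a = alfven_speed(2.0, 1.0e19, M_D)
```

For these parameters v_A comes out near 1e7 m/s, a typical tokamak-core magnitude.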

  10. Kinetic studies of divertor heat fluxes in Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Bateman, G.; Kritz, A. H.; Rafiq, T.; Park, G. Y.; Chang, C. S.; Brunner, D.; Hughes, J. W.; Labombard, B.; Terry, J.

    2010-11-01

    The kinetic XGC0 code [C.S. Chang et al, Phys. Plasmas 11 (2004) 2649] is used to model the H- mode pedestal and SOL regions in Alcator C-Mod discharges. The self-consistent simulations in this study include kinetic neoclassical physics and anomalous transport models along with the ExB flow shear effects. The heat fluxes on the divertor plates are computed and the fluxes to the outer plate are compared with experimental observations. The dynamics of the radial electric field near the separatrix and in the SOL region are computed with the XGC0 code, and the effect of the anomalous transport on the heat fluxes in the SOL region is investigated. In particular, the particle and thermal diffusivities obtained in the analysis mode are compared with predictions from the theory-based anomalous transport models such as MMM95 [G. Bateman et al, Phys. Plasmas 5 (1998) 1793] and DRIBM [T. Rafiq et al, to appear in Phys. Plasmas (2010)]. It is found that there is a notable pinch effect in the inner separatrix region. Possible physical mechanisms for the particle and thermal pinches are discussed.

  11. Implementation of non-axisymmetric mesh system in the gyrokinetic PIC code (XGC) for Stellarators

    NASA Astrophysics Data System (ADS)

    Moritaka, Toseo; Hager, Robert; Cole, Michael; Chang, Choong-Seock; Lazerson, Samuel; Ku, Seung-Hoe; Ishiguro, Seiji

    2017-10-01

    Gyrokinetic simulation is a powerful tool for investigating turbulent and neoclassical transport based on the first principles of plasma kinetics. The gyrokinetic PIC code XGC has been developed for integrated simulations that cover the entire region of tokamaks. Complicated field-line and boundary structures must be taken into account to demonstrate edge plasma dynamics under the influence of the X-point and vessel components. XGC employs a gyrokinetic Poisson solver on an unstructured triangle mesh to deal with this difficulty. We introduce numerical schemes newly developed for XGC simulation in non-axisymmetric stellarator geometry. Triangle meshes in each poloidal plane are defined by the PEST poloidal angle in the VMEC equilibrium so that they have the same regular structure in the straight-field-line coordinate. The electric charge of a marker particle is distributed to the triangles specified by field-following projection to the neighboring poloidal planes. 3D spline interpolation on a cylindrical mesh is also used to obtain the equilibrium magnetic field at the particle position. These schemes capture the anisotropic plasma dynamics and the resulting potential structure with high accuracy. The triangle meshes can connect smoothly to unstructured meshes in the edge region. We will present a validation test in the core region of the Large Helical Device and discuss future challenges toward edge simulations.

  12. Study of no-man's land physics in the total-f gyrokinetic code XGC1

    NASA Astrophysics Data System (ADS)

    Ku, Seung Hoe; Chang, C. S.; Lang, J.

    2014-10-01

    While the "transport shortfall" in the "no-man's land" has often been observed in delta-f codes, it has not yet been observed in the global total-f gyrokinetic particle code XGC1. Since the interaction between edge and core transport appears to be a critical element in predicting ITER performance, understanding the no-man's-land issue is an important physics research topic. Simulation results using the Holland case will be presented, and the physics causing the shortfall phenomenon will be discussed. Nonlinear, nonlocal interaction of turbulence, secondary flows, and transport appears to be the key.

  13. A fast low-to-high confinement mode bifurcation dynamics in the boundary-plasma gyrokinetic code XGC1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ku, S.; Chang, C. S.; Hager, R.

    Here, a fast edge turbulence suppression event has been simulated in the electrostatic version of the gyrokinetic particle-in-cell code XGC1 in a realistic diverted tokamak edge geometry under neutral particle recycling. The results show that turbulent Reynolds stress followed by neoclassical ion orbit loss together conspire to form the sustaining radial electric field shear and to quench turbulent transport just inside the last closed magnetic flux surface. As a result, the main suppression action is located in a thin radial layer around ψN ≃ 0.96–0.98, where ψN is the normalized poloidal flux, with a time scale of ~0.1 ms.

  14. Development of a fully implicit particle-in-cell scheme for gyrokinetic electromagnetic turbulence simulation in XGC1

    NASA Astrophysics Data System (ADS)

    Ku, Seung-Hoe; Hager, R.; Chang, C. S.; Chacon, L.; Chen, G.; EPSI Team

    2016-10-01

    The cancellation problem has been a long-standing issue for long-wavelength modes in electromagnetic gyrokinetic PIC simulations in toroidal geometry. In an attempt to resolve this issue, we implemented a fully implicit time integration scheme in the full-f gyrokinetic PIC code XGC1. The new scheme - based on the implicit Vlasov-Darwin PIC algorithm by G. Chen and L. Chacon - can potentially resolve the cancellation problem. The time advance for the field and particle equations is space-time-centered, with particle sub-cycling. The resulting system of equations is solved by a Picard iteration solver with a fixed-point accelerator. The algorithm is implemented in the parallel-velocity formalism instead of the canonical parallel-momentum formalism. XGC1 specializes in simulating the tokamak edge plasma with magnetic separatrix geometry. A fully implicit scheme could be a path to accurate and efficient electromagnetic gyrokinetic simulations. We will test whether this numerical scheme overcomes the cancellation problem and reproduces the dispersion relation of Alfven waves and tearing modes in cylindrical geometry. Funded by US DOE FES and ASCR, with computing resources provided by OLCF through ALCC.
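The Picard solve mentioned above can be illustrated on a scalar fixed-point problem. This is a schematic stand-in with simple under-relaxation; the actual solver operates on the coupled field-particle system and uses a more sophisticated fixed-point accelerator:

```python
def picard_solve(g, x0, tol=1e-10, max_iter=200, relax=1.0):
    """Solve x = g(x) by Picard (fixed-point) iteration with under-relaxation.

    Toy scalar version of the implicit time-step solve: iterate until
    successive iterates agree to within tol. relax < 1 damps the update
    when plain Picard would diverge.
    """
    x = x0
    for i in range(max_iter):
        x_new = (1.0 - relax) * x + relax * g(x)
        if abs(x_new - x) < tol:
            return x_new, i + 1  # converged value and iteration count
        x = x_new
    raise RuntimeError("Picard iteration did not converge")
```

For instance, `picard_solve(math.cos, 1.0)` converges to the fixed point of cos(x) near 0.739.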

  15. Detailed study of spontaneous rotation generation in diverted H-mode plasma using the full-f gyrokinetic code XGC1

    NASA Astrophysics Data System (ADS)

    Seo, Janghoon; Chang, C. S.; Ku, S.; Kwon, J. M.; Yoon, E. S.

    2013-10-01

    The full-f gyrokinetic code XGC1 is used to study the details of toroidal momentum generation in H-mode plasma. Diverted DIII-D geometry is used, with Monte Carlo neutral particles that are recycled at the limiter wall. Nonlinear Coulomb collisions conserve particle number, momentum, and energy. Gyrokinetic ions and adiabatic electrons are used in the present simulation to include the effects of ion gyrokinetic turbulence and neoclassical physics under self-consistent radial electric field generation. Ion orbit-loss physics is automatically included. Simulations show a strong co-Ip flow in the H-mode layer at the outside midplane, similar to the experimental observations from DIII-D and ASDEX-U. The co-Ip flow in the edge propagates inward into the core. It is found that the strong co-Ip flow generation is mostly from neoclassical physics. The inward momentum transport, on the other hand, is from turbulence physics, consistent with the theory of residual stress from symmetry breaking. Therefore, the interaction between neoclassical and turbulence physics is a key factor in spontaneous momentum generation.

  16. Partnership for Edge Physics (EPSI), University of Texas Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moser, Robert; Carey, Varis; Michoski, Craig

    Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions by comparison to experimental observations, and when using the predictions for design and operation of devices. However, high-fidelity simulations of tokamak plasmas, particularly those aimed at characterizing the edge plasma physics, are computationally expensive, so lower-cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the cost of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. This means that the total number of particles in a simulation, and therefore its cost, can be reduced while maintaining the same accuracy.
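The resampling idea can be sketched as weighted (systematic) resampling that preserves the total marker weight. This is only a sketch of the importance-sampling concept; XGC's actual resampler additionally constrains velocity-space moments on a phase-space grid, which is omitted here:

```python
import numpy as np

def resample_markers(v, w, n_out, rng):
    """Systematic importance resampling of marker particles.

    Draws n_out markers with probability proportional to their weight w
    and gives them equal weights whose sum preserves the total weight.
    Low-weight markers are culled; high-weight markers may be duplicated.
    """
    w = np.asarray(w, dtype=float)
    cdf = np.cumsum(w / w.sum())
    u = (rng.random() + np.arange(n_out)) / n_out   # stratified uniforms in [0, 1)
    idx = np.searchsorted(cdf, u)                   # selected marker indices
    new_w = np.full(n_out, w.sum() / n_out)         # equal weights, total preserved
    return np.asarray(v)[idx], new_w
```

Systematic (stratified) draws give lower resampling variance than independent draws, so low-order moments of the weighted distribution are approximately preserved.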

  17. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of particle-in-cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance, and we focus optimization efforts there. Particle push kernels operate at high arithmetic intensity and are not likely to be memory-bandwidth or even cache-bandwidth bound on KNL. Therefore, we see only minor benefits from the high-bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.
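A particle push kernel of the kind profiled here can be written to operate on whole structure-of-arrays batches, the numpy analogue of the SIMD vectorization discussed above. The Boris push below is an illustrative sketch, not the XGC1 kernel:

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """Vectorized Boris velocity push for all particles at once (SoA layout).

    v, E, B are (n, 3) arrays of velocities and fields at the particle
    positions; qm is charge/mass. Whole-array operations map naturally onto
    SIMD lanes, which is the vectorization payoff discussed above.
    """
    v_minus = v + 0.5 * qm * dt * E                 # first half electric kick
    t = 0.5 * qm * dt * B                           # rotation vector
    s = 2.0 * t / (1.0 + np.sum(t * t, axis=1, keepdims=True))
    v_prime = v_minus + np.cross(v_minus, t)        # magnetic rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)         # magnetic rotation, step 2
    return v_plus + 0.5 * qm * dt * E               # second half electric kick
```

With E = 0 the push is a pure rotation, so kinetic energy and the velocity component along B are conserved exactly, a convenient correctness check.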

  18. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

    Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.

  19. Fully non-linear multi-species Fokker-Planck-Landau collisions for gyrokinetic particle-in-cell simulations of fusion plasma

    NASA Astrophysics Data System (ADS)

    Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.

    2015-11-01

    We describe the implementation and application of a time-dependent, fully nonlinear multi-species Fokker-Planck-Landau collision operator based on the single-species work of Yoon and Chang [Phys. Plasmas 21, 032503 (2014)] in the full-function gyrokinetic particle-in-cell codes XGC1 [Ku et al., Nucl. Fusion 49, 115021 (2009)] and XGCa. XGC simulations include the pedestal and scrape-off layer, where significant deviations of the particle distribution function from a Maxwellian can occur. Thus, in order to describe collisional effects on neoclassical and turbulence physics accurately, the use of a nonlinear collision operator is a necessity. Our collision operator is based on a finite volume method using the velocity-space distribution functions sampled from the marker particles. Since the same fine configuration-space mesh is used for collisions and the Poisson solver, the workload due to collisions can be comparable to or larger than the workload due to particle motion. We demonstrate that computing time spent on collisions can be kept affordable by applying advanced parallelization strategies while conserving mass, momentum, and energy to reasonable accuracy. We also show results of production-scale XGCa simulations in the H-mode pedestal and compare to conventional theory. Work supported by US DOE OFES and OASCR.
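The conservation requirement can be illustrated with a simple post-step moment correction: after an approximate collision kick, shift and rescale the velocities so the weighted momentum and energy match their pre-collision values. This is a common corrective device and only a 1D sketch; XGC's operator enforces conservation within its finite-volume discretization rather than by this rescaling:

```python
import numpy as np

def conserve_moments(v_new, v_old, w):
    """Shift and rescale v_new so its weighted momentum and energy (first
    and second weighted moments) match those of v_old, with weights w.
    """
    def mean(v):
        return np.average(v, weights=w)

    def var(v):
        return np.average((v - mean(v)) ** 2, weights=w)

    shifted = v_new - mean(v_new) + mean(v_old)           # restore momentum
    scale = np.sqrt(var(v_old) / var(v_new))              # restore thermal spread
    return mean(v_old) + scale * (shifted - mean(v_old))  # <v> and <v^2> now match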

  20. Study of neoclassical effects on the pedestal structure in ELMy H-mode plasmas

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Bateman, G.; Kritz, A. H.; Rafiq, T.; Park, G. Y.; Ku, S.; Chang, C. S.; Snyder, P. B.

    2009-11-01

    The neoclassical effects on the H-mode pedestal structure are investigated in this study. First-principles kinetic simulations of the neoclassical pedestal dynamics are combined with the MHD stability conditions for triggering ELM crashes that limit the pedestal width and height in H-mode plasmas. The neoclassical kinetic XGC0 code [1] is used to produce systematic scans over plasma parameters including plasma current, elongation, and triangularity. As plasma profiles evolve, the MHD stability limits of these profiles are analyzed with the ideal MHD stability ELITE code [2]. The scalings of the pedestal width and height are presented as a function of the scanned plasma parameters. Simulations with the XGC0 code, which include coupled ion-electron dynamics, yield predictions for both ion and electron pedestal profiles. Differences in the electron and ion pedestal scalings are investigated. [1] C.S. Chang et al, Phys. Plasmas 11 (2004) 2649. [2] P.B. Snyder et al, Phys. Plasmas 9 (2002) 2037.

  1. Balancing Particle and Mesh Computation in a Particle-In-Cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert

    2016-01-01

    The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both of these are subject to load imbalances that can degrade performance and that evolve during a simulation. Each can be addressed adequately on its own, but optimizing for just one can introduce significant load imbalances in the other, degrading overall performance. A technique based on Golden Section Search has been developed that minimizes wallclock time given prior information on wallclock time, the current particle distribution, and the mesh cost per cell, and that adapts to evolving load imbalance in both particle and mesh work. In problems of interest this doubled the performance of full-system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.
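Golden Section Search itself is simple to sketch. In the setting above, f would be the measured wallclock time as a function of a tunable work-split parameter; the quadratic cost in the usage example below is a placeholder, not a real timing model:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden Section Search for the minimizer of a unimodal f on [a, b].

    Each iteration shrinks the bracket by the factor 1/phi ~ 0.618 while
    reusing one interior function evaluation, so only one new f-evaluation
    is needed per step.
    """
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                     # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                           # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

Usage: `golden_section_min(lambda t: (t - 2.0) ** 2, 0.0, 5.0)` converges to the minimizer at 2.0.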

  2. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER [Book Chapter]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in the ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations were not previously possible because the time-to-solution fell short by a factor of over 10 for completing one physics case in less than 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism; adaptive parallel I/O; staging I/O and data reduction using dynamic and asynchronous application interactions; dynamic repartitioning for balancing computational work in pushing particles and in grid-related work; scalable and accurate discretization algorithms for nonlinear Coulomb collisions; and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.

  3. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    NASA Astrophysics Data System (ADS)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability on large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Besides being an obvious scalability issue if mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed-mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh-entity-centric view of the particle-mesh relationship, provides opportunities to address the data locality needs of many-core and GPU-supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles, and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Supported by the Partnership for Edge Physics Simulation (EPSI), Grant No. DE-SC0008449, and the Center for Extended Magnetohydrodynamic Modeling (CEMM), Grant No. DE-SC0006618.

  4. Xanthogranulomatous cholecystitis: differentiation from associated gall bladder carcinoma.

    PubMed

    Rao, R V Raghavendra; Kumar, Ashok; Sikora, Sadiq S; Saxena, Rajan; Kapoor, Vinay K

    2005-01-01

    Xanthogranulomatous cholecystitis (XGC) is a destructive form of chronic cholecystitis. In some patients it coexists with gall bladder carcinoma (GBC), and it is often difficult to differentiate between the two. The present study was performed with the aim of identifying differentiating features of XGC and of XGC with associated gall bladder carcinoma (XGC ass. GBC). A retrospective analysis of prospectively maintained data on 4800 cholecystectomies performed from January 1988 to December 2003 was carried out. On histopathology, 453 cholecystectomy specimens revealed XGC. These patients were divided into two groups: those with associated GBC (n=26) and those without GBC (n=427). Clinical, radiological and operative findings were compared between the two groups. A P value of < 0.05 was considered statistically significant. The incidence of associated GBC in the present series was 6%. XGC patients with associated GBC were older at presentation than those with XGC alone, and there was a male preponderance. XGC patients with associated GBC were more likely to present with anorexia, weight loss, a palpable lump and jaundice. Gall stones were present in the majority of patients in both groups. GB wall thickening, a GB mass and enlarged abdominal lymph nodes may be found on imaging in both groups, but more so in patients with associated GBC. Both preoperative FNAC and peroperative FNAC/imprint cytology failed to reveal the associated GBC with XGC in some patients.

  5. Gyroaveraging operations using adaptive matrix operators

    NASA Astrophysics Data System (ADS)

    Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock

    2018-05-01

    A new adaptive scheme for carrying out gyroaveraging operations with matrices in particle-in-cell codes is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
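The operation these matrices encode is, at heart, an average of the field over a ring of Larmor radius ρ around the gyro-center. A minimal quadrature sketch follows; in the matrix formulation, each ring point's interpolation weights form a row of a sparse gyroaveraging matrix on the mesh, whereas here the field is evaluated analytically:

```python
import numpy as np

def gyroaverage(phi, x, y, rho, n_points=32):
    """Gyroaverage of phi at (x, y): mean of phi over a ring of radius rho.

    Uniform gyroangle quadrature; for smooth periodic integrands this
    converges spectrally fast in n_points.
    """
    alpha = 2.0 * np.pi * np.arange(n_points) / n_points
    return np.mean(phi(x + rho * np.cos(alpha), y + rho * np.sin(alpha)))
```

A standard check: for a plane wave cos(kx) the exact gyroaverage is J0(k*rho), e.g. J0(2) ≈ 0.2239 for k = 1, rho = 2.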

  6. Neutral recycling effects on ITG turbulence

    DOE PAGES

    Stotler, D. P.; Lang, J.; Chang, C. S.; ...

    2017-07-04

    Here, the effects of recycled neutral atoms on tokamak ion temperature gradient (ITG) driven turbulence have been investigated in a steep edge pedestal, magnetic separatrix configuration, with the full-f edge gyrokinetic code XGC1. An adiabatic electron model has been used; hence, the impacts of neutral particles and turbulence on the density gradient are not considered, nor are electromagnetic turbulence effects. The neutral atoms enhance the ITG turbulence, first, by increasing the ion temperature gradient in the pedestal via the cooling effects of charge exchange and, second, by a relative reduction in the E×B shearing rate.

  7. Neutral recycling effects on ITG turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stotler, D. P.; Lang, J.; Chang, C. S.

    Here, the effects of recycled neutral atoms on tokamak ion temperature gradient (ITG) driven turbulence have been investigated in a steep edge pedestal, magnetic separatrix configuration, with the full-f edge gyrokinetic code XGC1. An adiabatic electron model has been used; hence, the impacts of neutral particles and turbulence on the density gradient are not considered, nor are electromagnetic turbulence effects. The neutral atoms enhance the ITG turbulence, first, by increasing the ion temperature gradient in the pedestal via the cooling effects of charge exchange and, second, by a relative reduction in the E×B shearing rate.

  8. Anomalous transport in the H-mode pedestal of Alcator C-Mod discharges

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Hughes, J. W.; Greenwald, M. J.; Kritz, A. H.; Rafiq, T.

    2017-02-01

    Anomalous transport in the H-mode pedestal region of five Alcator C-Mod discharges, representing a collisionality scan, is analyzed. Understanding anomalous transport in the pedestal region is important for the development of a comprehensive model of the H-mode pedestal slope. In this research, a possible role of the drift resistive inertial ballooning modes (DRIBMs) (Rafiq et al 2010 Phys. Plasmas 17 082511) in the edge of Alcator C-Mod discharges is analyzed. The stability analysis, carried out using the TRANSP code, indicates that the DRIBMs are strongly unstable in Alcator C-Mod discharges with large electron collisionality. An improved interpretive analysis of H-mode pedestal experimental data is carried out utilizing the additive flux minimization technique (Pankin et al 2013 Phys. Plasmas 20 102501) together with the guiding-center neoclassical kinetic XGC0 code. The neoclassical and neutral physics are simulated with the XGC0 code, and the anomalous fluxes are computed using the additive flux minimization technique. The anomalous fluxes are reconstructed and compared with each other for the collisionality-scan Alcator C-Mod discharges. It is found that the electron thermal anomalous diffusivities at the pedestal top increase with the electron collisionality. This dependence can also point to the drift resistive inertial ballooning modes as the modes that drive the anomalous transport in the plasma edge of highly collisional discharges.

  9. Gyroaveraging operations using adaptive matrix operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock

    A new adaptive scheme for carrying out gyroaveraging operations with matrices in particle-in-cell codes is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of the temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
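    As a rough illustration of gyroaveraging with a matrix operator (not the adaptive scheme of this record, which works on an unstructured mesh with a μ-preserving velocity grid), the sketch below builds a fixed-Larmor-radius gyroaveraging matrix on a periodic 1D grid: each row averages the field over sample points on the Larmor ring, distributing each sample onto neighboring nodes by linear interpolation. All names and the 1D setting are assumptions for illustration.

```python
import numpy as np

def gyroaverage_matrix(n, rho, n_points=8):
    """Build an n x n gyroaveraging matrix on a periodic 1D grid of unit
    spacing: row i averages the field at positions i + rho*cos(theta_k),
    spreading each sample over the two nearest nodes by linear interpolation."""
    M = np.zeros((n, n))
    thetas = 2 * np.pi * np.arange(n_points) / n_points
    for i in range(n):
        for th in thetas:
            x = (i + rho * np.cos(th)) % n
            j = int(np.floor(x))
            frac = x - j
            M[i, j % n] += (1 - frac) / n_points
            M[i, (j + 1) % n] += frac / n_points
    return M

# A constant field is unchanged (rows sum to 1), while short-wavelength
# structure is damped, as expected of a gyroaverage.
M = gyroaverage_matrix(32, rho=2.0)
assert np.allclose(M @ np.ones(32), 1.0)
```

    In a real code the matrix rows would be built per velocity-grid node, with ρ varying with μ and the local magnetic field, which is what the adaptive velocity grid in this work optimizes.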

  11. Gyrokinetic simulation of edge blobs and divertor heat-load footprint

    NASA Astrophysics Data System (ADS)

    Chang, C. S.; Ku, S.; Hager, R.; Churchill, M.; D'Azevedo, E.; Worley, P.

    2015-11-01

    A gyrokinetic study of the divertor heat-load width Lq has been performed using the edge gyrokinetic code XGC1. Both neoclassical and electrostatic turbulence physics are self-consistently included in the simulation, with a fully nonlinear Fokker-Planck collision operator and neutral recycling. Gyrokinetic ions and drift-kinetic electrons constitute the plasma in realistic magnetic separatrix geometry. The electron density fluctuations from nonlinear turbulence form blobs, similar to those seen in experiments. DIII-D and NSTX geometries have been used to represent today's conventional and tight-aspect-ratio tokamaks. XGC1 shows that ion neoclassical orbit dynamics dominates over the blob physics in setting Lq in the sample DIII-D and NSTX plasmas, recovering the experimentally observed 1/Ip-type scaling. The magnitude of Lq is also in the right ballpark in comparison with experimental data. In an ITER standard plasma, however, XGC1 shows that the negligible neoclassical orbit excursion makes the blob dynamics dominate Lq. In contrast to the Lq of roughly 1 mm (when mapped back to the outboard midplane) predicted by simple-minded extrapolation from present-day data, XGC1 shows that Lq in ITER is about 1 cm, somewhat smaller than the average blob size. Supported by US DOE and the INCITE program.
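    For reference, the divertor heat-load width quoted in studies like this one is commonly characterized by the integral width: the deposited power per unit length divided by the peak heat flux. A minimal sketch with illustrative numbers (not simulation data); the exponential profile is an assumption to make the expected width obvious.

```python
import numpy as np

def heat_flux_width(r, q):
    """Integral heat-flux width: lambda_q = (1/q_max) * integral of q dr."""
    return np.trapz(q, r) / np.max(q)

# For q = q0*exp(-r/lq), the integral width recovers the decay length lq.
r = np.linspace(0.0, 0.05, 2001)   # m, midplane-mapped distance from separatrix
lq_true = 0.005                    # 5 mm decay length, illustrative
q = 10.0 * np.exp(-r / lq_true)    # MW/m^2, illustrative
print(round(heat_flux_width(r, q), 4))
```

    The same integral definition applies to simulated or measured target profiles regardless of their shape, which is why it is used to compare codes and experiments.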

  12. Cross-verification of the GENE and XGC codes in preparation for their coupling

    NASA Astrophysics Data System (ADS)

    Jenko, Frank; Merlo, Gabriele; Bhattacharjee, Amitava; Chang, Cs; Dominski, Julien; Ku, Seunghoe; Parker, Scott; Lanti, Emmanuel

    2017-10-01

    A high-fidelity Whole Device Model (WDM) of a magnetically confined plasma is a crucial tool for planning and optimizing the design of future fusion reactors, including ITER. Aiming at building such a tool, in the framework of the Exascale Computing Project (ECP) the two existing gyrokinetic codes GENE (Eulerian delta-f) and XGC (PIC full-f) will be coupled, enabling first-principles kinetic WDM simulations. In preparation for this ultimate goal, a benchmark between the two codes is carried out, looking at ITG modes in the adiabatic electron limit. This verification exercise is also joined by the global Lagrangian PIC code ORB5. Linear and nonlinear comparisons have been carried out, neglecting collisions and sources for simplicity. Very good agreement is recovered on the frequency, growth rate, and mode structure of linear modes. Similarly excellent agreement is observed when comparing the evolution of the heat flux and of the background temperature profile during nonlinear simulations. Work supported by the US DOE under the Exascale Computing Project (17-SC-20-SC).

  13. A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov

    2016-06-15

    Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described, and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is demonstrated.
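    A toy example of why a finite-volume (flux-form) discretization conserves moments exactly: cell updates are written as differences of interface fluxes, so the sum telescopes and the density moment is preserved to machine precision. This is only a 1D velocity-space diffusion sketch of the conservation mechanism with made-up names, not the multi-species Landau operator itself (which additionally conserves momentum and energy by construction of its friction and diffusion coefficients).

```python
import numpy as np

def collide_step(f, dv, dt, D=1.0):
    """One explicit step of a toy 1D velocity-space diffusion in flux form.
    Zero-flux boundaries plus telescoping interface fluxes conserve the
    density moment sum(f)*dv to machine precision."""
    flux = np.zeros(len(f) + 1)
    flux[1:-1] = -D * (f[1:] - f[:-1]) / dv   # interface fluxes
    return f - dt / dv * (flux[1:] - flux[:-1])

v = np.linspace(-5.0, 5.0, 101)
dv = v[1] - v[0]
f = np.exp(-v**2) + 0.3 * np.exp(-(v - 2.0)**2)  # non-Maxwellian initial state
density0 = f.sum() * dv
for _ in range(100):
    f = collide_step(f, dv, dt=1e-3)
assert abs(f.sum() * dv - density0) < 1e-10   # mass conserved despite relaxation
```

    The distribution relaxes toward a flatter state while its zeroth moment never drifts, which is the property the finite-volume Landau discretization guarantees for all three collisional invariants.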

  15. Roofline Analysis in the Intel® Advisor to Deliver Optimized Performance for applications on Intel® Xeon Phi™ Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Lobet, Mathieu; Deslippe, Jack

    In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for the Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study the performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science applications to achieve up to a 4.6x speed-up and prepare for future exascale architectures. # Goal/Relevance of Session The roofline model [1,2] is a powerful tool for analyzing the performance of applications with respect to the theoretical peak achievable on a given computer architecture. It allows one to graphically represent the performance of an application in terms of operational intensity, i.e. the ratio of flops performed to bytes moved from memory, in order to guide optimization efforts. Given the scale and complexity of modern science applications, it can often be a tedious task for the user to perform the analysis at the level of functions or loops to identify where performance gains can be made. With new Intel tools, it is now possible to automate this task, as well as to base the estimates of peak performance on measurements rather than vendor specifications. The goal of this session is to demonstrate how the roofline feature of Intel Advisor can be used to balance memory- vs. computation-related optimization efforts and effectively identify performance bottlenecks. A series of typical optimization techniques, illustrated by the kernel cases, will be addressed: cache blocking, structure refactoring, data alignment, and vectorization. # Description of the codes ## XGC1 The XGC1 code [3] is a magnetic fusion Particle-In-Cell code that uses an unstructured mesh for its Poisson solver, which allows it to accurately resolve the edge plasma of a magnetic fusion device. 
After recent optimizations to its collision kernel [4], most of the computing time is spent in the electron push (pushe) kernel, where these optimization efforts have been focused. The kernel code scaled well with MPI+OpenMP but had almost no automatic compiler vectorization, in part due to indirect memory addressing and in part due to low trip counts of low-level loops that would be candidates for vectorization. Particle blocking and sorting have been implemented to increase the trip counts of low-level loops and improve memory locality, and OpenMP directives have been added to vectorize compute-intensive loops that were identified by Advisor. The optimizations have improved the performance of the pushe kernel 2x on Haswell processors and 1.7x on KNL. The KNL node-for-node performance has been brought to within 30% of a NERSC Cori Phase I Haswell node, and we expect to bridge this gap by reducing the memory footprint of compute-intensive routines to improve cache reuse. ## PICSAR PICSAR is a Fortran/Python high-performance Particle-In-Cell library targeting MIC architectures, first designed to be coupled with the PIC code WARP for the simulation of laser-matter interaction and particle accelerators. PICSAR also contains a Fortran stand-alone kernel for performance studies and benchmarks. An MPI domain decomposition is used between NUMA domains, and a tile decomposition (cache blocking) handled by OpenMP has been added for shared-memory parallelism and better cache management. The so-called current deposition and field gathering steps that compose the PIC time loop constitute major hotspots that have been rewritten to enable more efficient vectorization. Particle communications between tiles and MPI domains have been merged and parallelized. All considered, these improvements provide speedups of 3.1x for order-1 and 4.6x for order-3 interpolation shape factors on KNL configured in SNC4 quadrant flat mode. 
Performance is similar between a node of Cori Phase 1 and KNL at order 1, and better on KNL by a factor of 1.6 at order 3 with the considered test case (a homogeneous thermal plasma).
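    The roofline estimate itself is a one-liner: attainable performance is the lesser of the compute peak and the memory-bandwidth ceiling at a given arithmetic intensity. A sketch with assumed, KNL-like numbers (not vendor figures):

```python
def roofline_attainable_gflops(arithmetic_intensity, peak_gflops, mem_bw_gbs):
    """Roofline model: performance is capped by min(compute peak,
    bandwidth * arithmetic intensity), with intensity in flops per byte."""
    return min(peak_gflops, mem_bw_gbs * arithmetic_intensity)

# Illustrative, assumed machine balance: 3000 GFLOP/s peak, 400 GB/s bandwidth.
peak, bw = 3000.0, 400.0
for ai in (0.5, 2.0, 10.0):
    print(ai, roofline_attainable_gflops(ai, peak, bw))
```

    Kernels left of the ridge point (here peak/bw = 7.5 flops/byte) are memory-bound and benefit from cache blocking and data layout work; kernels right of it are compute-bound and benefit from vectorization, which is the triage Advisor's roofline plot automates.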

  16. Kinetic neoclassical calculations of impurity radiation profiles

    DOE PAGES

    Stotler, D. P.; Battaglia, D. J.; Hager, R.; ...

    2016-12-30

    Modifications of the drift-kinetic transport code XGC0 to include the transport, ionization, and recombination of individual charge states, as well as the associated radiation, are described. The code is first applied to a simulation of an NSTX H-mode discharge with carbon impurity to demonstrate the approach to coronal equilibrium. The effects of neoclassical phenomena on the radiated power profile are examined sequentially through the activation of individual physics modules in the code. Orbit squeezing and the neoclassical inward pinch result in increased radiation for temperatures above a few hundred eV and changes to the ratios of charge state emissions at a given electron temperature. Analogous simulations with a neon impurity yield qualitatively similar results.

  17. Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic

    2015-11-01

    The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. The FPSDP will allow physicists to directly compare key laboratory measurements to simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. The FPSDP currently has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes, with synthetic diagnostic modules including: (i) 2D and 3D Reflectometry; (ii) Beam Emission Spectroscopy; and (iii) 1D Electron Cyclotron Emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD code M3D-C1, and the electromagnetic hybrid eigenmode code NOVA-K. Progress toward development of a more comprehensive 2D Electron Cyclotron Emission module will also be discussed. This work is supported by DOE contract #DEAC02-09CH11466.

  18. Investigation of the plasma shaping effects on the H-mode pedestal structure using coupled kinetic neoclassical/MHD stability simulations

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Rafiq, T.; Kritz, A. H.; Park, G. Y.; Snyder, P. B.; Chang, C. S.

    2017-06-01

    The effects of plasma shaping on the H-mode pedestal structure are investigated. High fidelity kinetic simulations of the neoclassical pedestal dynamics are combined with the magnetohydrodynamic (MHD) stability conditions for triggering edge localized mode (ELM) instabilities that limit the pedestal width and height in H-mode plasmas. The neoclassical kinetic XGC0 code [Chang et al., Phys. Plasmas 11, 2649 (2004)] is used to carry out a scan over plasma elongation and triangularity. As plasma profiles evolve, the MHD stability limits of these profiles are analyzed with the ideal MHD ELITE code [Snyder et al., Phys. Plasmas 9, 2037 (2002)]. Simulations with the XGC0 code, which includes coupled ion-electron dynamics, yield predictions for both ion and electron pedestal profiles. The differences in the predicted H-mode pedestal width and height for DIII-D discharges with different elongations and triangularities are discussed. For the discharges with higher elongation, it is found that the gradients of the plasma profiles in the H-mode pedestal reach semi-steady states. In these simulations, the pedestal slowly continues to evolve to higher pedestal pressures and bootstrap currents until the peeling-ballooning stability conditions are satisfied. The discharges with lower elongation do not reach the semi-steady state, and ELM crashes are triggered at earlier times. The plasma elongation is found to have a stronger stabilizing effect than the plasma triangularity. For the discharges with lower elongation and lower triangularity, the ELM frequency is large, and the H-mode pedestal evolves rapidly. It is found that the temperature of neutrals in the scrape-off-layer (SOL) region can affect the dynamics of the H-mode pedestal buildup. However, the final pedestal profiles are nearly independent of the neutral temperature. The elongation and triangularity affect the pedestal widths of the plasma density and electron temperature profiles differently, which provides a new mechanism for controlling the pedestal bootstrap current and the pedestal stability.

  1. Gyrokinetic projection of the divertor heat-flux width from present tokamaks to ITER

    DOE PAGES

    Chang, Choong Seock; Ku, Seung-Hoe; Loarte, Alberto; ...

    2017-07-11

    The XGC1 edge gyrokinetic code is used to study the width of the heat flux to the divertor plates in attached plasma conditions. The flux-driven simulation is performed until an approximate power balance is achieved between the heat flux across the steep pedestal pressure gradient and the heat flux on the divertor plates.

  2. Xanthogranulomatous cholecystitis: a European and global perspective

    PubMed Central

    Hale, Matthew David; Roberts, Keith J; Hodson, James; Scott, Nigel; Sheridan, Maria; Toogood, Giles J

    2014-01-01

    Introduction: Xanthogranulomatous cholecystitis (XGC) is often mistaken for, and may predispose to, gallbladder carcinoma (GB Ca). This study reviews the worldwide variation in the incidence, investigation, management and outcome of patients with XGC. Methods: Data from 29 studies, cumulatively containing 1599 patients, were reviewed and results summarized by geographical region (Europe, India, Far East and Americas) with 95% confidence intervals (CIs) to present variability within regions. The main study outcomes were the incidence of XGC, its association with GB Ca and the treatment of patients with XGC. Results: Overall, the incidence of XGC was 1.3–1.9%, with the exception of India, where it was 8.8%. The incidence of GB Ca associated with XGC was lowest in European studies (3.3%), varying from 5.1–5.9% in the remaining regions. Confusion with or undiagnosed GB Ca led to 10.2% of patients receiving over- or under-treatment. Conclusions: XGC is a global disease and is associated with GB Ca. Characteristic pathological, radiological and clinical features are shared with GB Ca and contribute to considerable treatment inaccuracy. Tissue sampling by pre-operative endoscopic ultrasound or intra-operative frozen section is required to accurately diagnose gallbladder pathology and should be performed before any extensive resection. PMID:23991684

  3. Full-f XGC1 gyrokinetic study of improved ion energy confinement from impurity stabilization of ITG turbulence

    NASA Astrophysics Data System (ADS)

    Kim, Kyuho; Kwon, Jae-Min; Chang, C. S.; Seo, Janghoon; Ku, S.; Choe, W.

    2017-06-01

    Flux-driven full-f gyrokinetic simulations are performed to study carbon impurity effects on ion temperature gradient (ITG) turbulence and ion thermal transport in a toroidal geometry. Employing the full-f gyrokinetic code XGC1, both main ions and impurities are evolved self-consistently, including turbulence and neoclassical physics. It is found that the carbon impurity profile self-organizes to form an inwardly peaked density profile, which weakens the ITG instabilities and reduces the overall fluctuations and ion thermal transport. A stronger reduction appears in the low-frequency components of the fluctuations. The global structure of the E × B flow also changes, resulting in the reduction of global avalanche-like transport events in the impure plasma. Detailed properties of impurity transport are also studied, and it is revealed that both the inward neoclassical pinch and the outward turbulent transport are equally important in the formation of the steady-state impurity profile.

  4. Involvement of Escherichia coli in pathogenesis of xanthogranulomatous cholecystitis with scavenger receptor class A and CXCL16-CXCR6 interaction.

    PubMed

    Sawada, Seiko; Harada, Kenichi; Isse, Kumiko; Sato, Yasunori; Sasaki, Motoko; Kaizaki, Yasuharu; Nakanuma, Yasuni

    2007-10-01

    Xanthogranulomatous cholecystitis (XGC) is characterized by the infiltration of numerous foamy macrophages. Bacterial infection is thought to be involved in the pathogenesis of XGC. Using XGC specimens and cultured murine biliary epithelial cells (BEC), the participation of E. coli and the roles of scavenger receptor class A (SCARA), as well as chemokine (C-X-C motif) ligand 16 (CXCL16) and its receptor chemokine (C-X-C motif) receptor 6 (CXCR6), were examined in the pathogenesis of XGC. E. coli components and genes were detected in XGC by immunohistochemistry and polymerase chain reaction (PCR), respectively. SCARA-recognizing E. coli was found in foamy macrophages aggregated in xanthogranulomatous lesions. CXCL16, which functions as a membrane-bound molecule and as a soluble chemokine to induce adhesion and migration of CXCR6(+) cells, was detected on gallbladder epithelia, and CXCR6(+)/CD8(+) T cells and CXCR6(+)/CD68(+) macrophages also accumulated. In cultured BEC, CXCL16 mRNA and secreted soluble CXCL16 were constantly detected and were upregulated by treatment with E. coli and lipopolysaccharide through Toll-like receptor 4. These findings suggest that SCARA in macrophages is involved in the phagocytosis of E. coli followed by foamy changes, and that bacterial infection causes the upregulation of CXCL16 in gallbladder epithelia, leading to the chemoattraction of macrophages via CXCL16-CXCR6 interaction and formation of the characteristic histology of XGC.

  5. Optimizing fusion PIC code performance at scale on Cori Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, T. S.; Deslippe, J.

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase 2 Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single-node performance by enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, nearly half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.

  6. A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma

    DOE PAGES

    Ku, S.; Hager, R.; Chang, C. S.; ...

    2016-04-01

    In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes a phase-space grid in addition to the usual marker particles, taking advantage of the computational strengths of both. The new scheme splits the particle distribution function of a kinetic equation into two parts: marker particles carry the fast space-time-varying, δf, part of the distribution function, and the coarse-grained phase-space grid contains the slowly space-time-varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential-form weight evolution equations that conventional delta-f schemes use. The particle weight can be slowly transferred to the phase-space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collisions, ionization, charge exchange, heat sources, radiative cooling, and others – can be operated on directly on the phase-space grid. The deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange, and wall loss – is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
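    The weight-transfer idea can be illustrated schematically: part of each marker's δf weight is deposited onto a coarse velocity grid, conserving the total weight while shrinking the weight carried by individual markers. The nearest-cell deposition and all names below are illustrative assumptions, not XGC1's actual scheme.

```python
import numpy as np

def transfer_weights_to_grid(v, w, grid_v, fraction=0.1):
    """Move a fraction of each marker's weight onto the nearest coarse
    velocity-grid cell. The total weight (markers + grid) is preserved,
    while each marker's individual weight shrinks."""
    grid_w = np.zeros(len(grid_v))
    dv = grid_v[1] - grid_v[0]
    idx = np.clip(np.round((v - grid_v[0]) / dv).astype(int), 0, len(grid_v) - 1)
    moved = fraction * w
    np.add.at(grid_w, idx, moved)   # unbuffered scatter-add onto the grid
    return w - moved, grid_w

rng = np.random.default_rng(0)
v = rng.normal(size=1000)                 # marker velocities
w = rng.normal(scale=0.1, size=1000)      # marker delta-f weights
grid_v = np.linspace(-4.0, 4.0, 33)       # coarse velocity grid
w_new, grid_w = transfer_weights_to_grid(v, w, grid_v)
assert np.isclose(w_new.sum() + grid_w.sum(), w.sum())
```

    Once part of δf lives on the grid, grid-based operators (collisions, sources) can act on it directly, which is the point of the hybrid scheme.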

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the Cray XK7 Titan, at its maximal heterogeneous capability. Such simulations had not been possible before due to a more than tenfold shortfall in time-to-solution for completing one physics case in less than 5 days of wall-clock time. Frontier techniques employed include nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, and dynamic repartitioning.

  10. Synthetic diagnostics platform for fusion plasmas (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L., E-mail: lshi@pppl.gov; Valeo, E. J.; Tobias, B. J.

    A Synthetic Diagnostics Platform (SDP) for fusion plasmas has been developed which provides state-of-the-art synthetic reflectometry, beam emission spectroscopy, and Electron Cyclotron Emission (ECE) diagnostics. Interfaces to the plasma simulation codes GTC, XGC-1, GTS, and M3D-C1 are provided, enabling detailed validation of these codes. In this paper, we give an overview of SDP’s capabilities, and introduce the synthetic diagnostic modules. A recently developed synthetic ECE Imaging module which self-consistently includes refraction, diffraction, emission, and absorption effects is discussed in detail. Its capabilities are demonstrated on two model plasmas. The importance of synthetic diagnostics in validation is shown by applying the SDP to M3D-C1 output and comparing it with measurements from an edge harmonic oscillation mode on DIII-D.

  11. Synthetic diagnostics platform for fusion plasmas (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L.; Valeo, E. J.; Tobias, B. J.

    A Synthetic Diagnostics Platform (SDP) for fusion plasmas has been developed which provides state-of-the-art synthetic reflectometry, beam emission spectroscopy, and Electron Cyclotron Emission (ECE) diagnostics. Interfaces to the plasma simulation codes GTC, XGC-1, GTS, and M3D-C1 are provided, enabling detailed validation of these codes. In this paper, we give an overview of SDP's capabilities, and introduce the synthetic diagnostic modules. A recently developed synthetic ECE Imaging module which self-consistently includes refraction, diffraction, emission, and absorption effects is discussed in detail. Its capabilities are demonstrated on two model plasmas. Finally, the importance of synthetic diagnostics in validation is shown by applying the SDP to M3D-C1 output and comparing it with measurements from an edge harmonic oscillation mode on DIII-D.

  12. Synthetic diagnostics platform for fusion plasmas (invited)

    DOE PAGES

    Shi, L.; Valeo, E. J.; Tobias, B. J.; ...

    2016-08-26

    A Synthetic Diagnostics Platform (SDP) for fusion plasmas has been developed which provides state-of-the-art synthetic reflectometry, beam emission spectroscopy, and Electron Cyclotron Emission (ECE) diagnostics. Interfaces to the plasma simulation codes GTC, XGC-1, GTS, and M3D-C1 are provided, enabling detailed validation of these codes. In this paper, we give an overview of SDP's capabilities, and introduce the synthetic diagnostic modules. A recently developed synthetic ECE Imaging module which self-consistently includes refraction, diffraction, emission, and absorption effects is discussed in detail. Its capabilities are demonstrated on two model plasmas. Finally, the importance of synthetic diagnostics in validation is shown by applying the SDP to M3D-C1 output and comparing it with measurements from an edge harmonic oscillation mode on DIII-D.

  13. Kinetic neoclassical transport in the H-mode pedestal

    DOE PAGES

    Battaglia, D. J.; Burrell, K. H.; Chang, C. S.; ...

    2014-07-16

    Multi-species kinetic neoclassical transport through the QH-mode pedestal and scrape-off layer on DIII-D is calculated using XGC0, a 5D full-f particle-in-cell drift-kinetic solver with self-consistent neutral recycling and sheath potentials. We achieved quantitative agreement between the flux-driven simulation and the experimental electron density, impurity density and orthogonal measurements of impurity temperature and flow profiles by adding random-walk particle diffusion to the guiding-center drift motion. Furthermore, we computed the radial electric field (Er) that maintains ambipolar transport across flux surfaces and to the wall self-consistently on closed and open magnetic field lines, and it is in excellent agreement with experiment. The Er inside the separatrix is the unique solution that balances the outward flux of thermal tail deuterium ions against the outward neoclassical electron flux and inward pinch of impurity and colder deuterium ions. Particle transport in the pedestal is primarily due to anomalous transport, while the ion heat and momentum transport are primarily due to neoclassical transport. The full-f treatment quantifies the non-Maxwellian energy distributions that describe a number of experimental observations in low-collisionality pedestals on DIII-D, including intrinsic co-Ip parallel flows in the pedestal, ion temperature anisotropy and large impurity temperatures in the scrape-off layer.
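    The "random-walk particle diffusion added to the guiding-center drift motion" amounts to a stochastic kick on top of the deterministic push. A minimal sketch under stated assumptions (a 1D radial coordinate, a constant drift, and a made-up anomalous diffusivity `D_anom`; none of these are XGC0's actual variables):

```python
import numpy as np

rng = np.random.default_rng(1)

def push_guiding_centers(psi, v_drift, D_anom, dt, steps):
    """Advance radial guiding-center positions: deterministic drift motion
    plus a random-walk kick standing in for anomalous diffusion."""
    psi = psi.copy()
    for _ in range(steps):
        kick = rng.normal(0.0, np.sqrt(2.0 * D_anom * dt), size=psi.shape)
        psi += v_drift * dt + kick
    return psi

psi0 = np.zeros(50_000)
psi = push_guiding_centers(psi0, v_drift=0.0, D_anom=1.0, dt=1e-3, steps=100)
# With zero drift, the ensemble spread should follow <psi^2> = 2*D*t,
# i.e. 2 * 1.0 * 0.1 = 0.2 here, which is the check for a correct random walk.
var = psi.var()
```

The kick amplitude `sqrt(2*D*dt)` is the standard choice that makes the random walk reproduce a diffusion equation with diffusivity `D` in the continuum limit.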

  14. Partnership for Edge Physics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kritz, Arnold H.; Rafiq, Tariq

    A major goal of our participation in the Edge Physics Simulation project has been to contribute to the understanding of the self-organization of tokamak turbulence fluctuations resulting in the formation of a staircase structure in the ion temperature. A second important goal is to demonstrate how small-scale turbulence in plasmas self-organizes with dynamically driven quasi-stationary flow shear. These goals have been accomplished through the analyses of the statistical properties of XGC1 flux-driven gyrokinetic electrostatic ion temperature gradient (ITG) turbulence simulation data in which neutrals are included. The ITG turbulence data, and in particular fluctuation data, were obtained from a massively parallel flux-driven gyrokinetic full-f particle-in-cell simulation of a DIII-D like equilibrium. Some of the findings are summarized below. It was observed that the emergence of staircase structure is related to the variations in the normalized temperature gradient length (R/LT) and the poloidal flow shear. Average turbulence intensity is found to be large in the vicinity of minima in R/LTi, where ITG growth is expected to be lower. The distributions of the occurrences of potential fluctuation are found to be Gaussian away from the staircase-step locations, but they are found to be non-Gaussian in the vicinity of staircase-step locations. The results of analytically derived expressions for the distribution of the occurrences of turbulence intensity and intensity flux were compared with the corresponding quantities computed using XGC1 simulation data and good agreement is found. The derived expressions predict inward and outward propagation of turbulence intensity flux in an intermittent fashion. The outward propagation of turbulence intensity flux occurs at staircase-step locations and is related to the change in poloidal flow velocity shear and to the change in the ion temperature gradient.
The standard deviation, skewness and kurtosis for turbulence quantities were computed and found to be large in the vicinity of the staircase-step structures. Large values of skewness and kurtosis can be explained by a temporary opening and closing of the structure which allows turbulence intensity events to propagate. The staircase patterns may reduce the ion heat transport, and a manipulation of these patterns may be used to optimize heat transport in tokamaks. An additional objective of the research in support of the Edge Physics Simulation initiative has been to improve the understanding of scrape-off layer thermal transport. In planning experiments and designing future tokamaks, it is important to understand the physical effects that contribute to divertor heat-load fluxes. The research accomplished will contribute to developing new models for the scrape-off layer region. The XGC0 code was used to compute the heat fluxes and the heat-load width in the outer divertor plates of C-Mod and DIII-D tokamaks. It was observed that the width of the XGC0 neoclassical heat-load was approximately inversely proportional to the total plasma current. Anomalous transport in the H-mode pedestal region of five Alcator C-Mod discharges, representing a collisionality scan, was analyzed. The understanding of anomalous transport in the pedestal region is important for the development of a comprehensive model for the H-mode pedestal slope. It was found that the electron thermal anomalous diffusivities at the pedestal top increase with the electron collisionality. This dependence can point to the drift resistive inertial ballooning modes (DRIBM) as the modes that drive the anomalous transport in the plasma edge of highly collisional discharges. The effects of plasma shaping on the H-mode pedestal structure were also investigated. The differences in the predicted H-mode pedestal width and height for the DIII-D discharges with different elongation and triangularities were discussed.
For the discharges with higher elongation, it was found that the gradients of the plasma profiles in the H-mode pedestal reach semi-steady states. In these simulations, the pedestal slowly continued to evolve to higher pedestal pressures and bootstrap currents until the peeling-ballooning stability conditions were satisfied. The discharges with lower elongation do not reach the semi-steady state, and ELM crashes were triggered at earlier times. The plasma elongation was found to have a stronger stabilizing effect than the plasma triangularity. For the discharges with lower elongation and lower triangularity, the ELM frequency was large, and the H-mode pedestal evolved rapidly. It was found that the temperature of neutrals in the scrape-off-layer region can affect the dynamics of the H-mode pedestal buildup. However, the final pedestal profiles were nearly independent of the neutral temperature. The elongation and triangularity affected the pedestal widths of plasma density and electron temperature profiles differently. This study illustrated a new mechanism for controlling the pedestal bootstrap current and the pedestal stability.
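    The statistical measures used above (skewness and kurtosis, and the Gaussian vs. non-Gaussian occurrence distributions near staircase steps) can be sketched as follows. The synthetic series below are stand-ins for illustration only, not XGC1 fluctuation data:

```python
import numpy as np

rng = np.random.default_rng(2)

def skewness(x):
    """Third standardized central moment."""
    xc = x - x.mean()
    return (xc**3).mean() / x.std()**3

def excess_kurtosis(x):
    """Fourth standardized central moment minus 3 (zero for a Gaussian)."""
    xc = x - x.mean()
    return (xc**4).mean() / x.std()**4 - 3.0

# Two synthetic fluctuation series: pure Gaussian noise (away from a
# staircase step) and Gaussian noise with rare positive bursts (near one).
quiet = rng.normal(0.0, 1.0, 100_000)
bursty = rng.normal(0.0, 1.0, 100_000)
hits = rng.random(100_000) < 0.01
bursty[hits] += rng.exponential(5.0, hits.sum())

skew_quiet, kurt_quiet = skewness(quiet), excess_kurtosis(quiet)
skew_burst, kurt_burst = skewness(bursty), excess_kurtosis(bursty)
# The intermittent series shows clearly positive skewness and excess
# kurtosis, while the Gaussian series stays near zero; this is the kind of
# signature used to locate the staircase-step positions in the data.
```

Scanning such moments along the radial coordinate is one way to flag the intermittent, non-Gaussian windows the abstract describes.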

  15. Bootstrap Current for the Edge Pedestal Plasma in a Diverted Tokamak Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koh, S.; Chang, C. S.; Ku, S.

    The edge bootstrap current plays a critical role in the equilibrium and stability of the steep edge pedestal plasma. The pedestal plasma has an unconventional and difficult neoclassical property, as compared with the core plasma. It has a narrow passing particle region in velocity space that can be easily modified or destroyed by Coulomb collisions. At the same time, the edge pedestal plasma has steep pressure and electrostatic potential gradients whose scale-lengths are comparable with the ion banana width, and includes a magnetic separatrix surface, across which the topological properties of the magnetic field and particle orbits change abruptly. A drift-kinetic particle code XGC0, equipped with a mass-momentum-energy conserving collision operator, is used to study the edge bootstrap current in a realistic diverted magnetic field geometry with a self-consistent radial electric field. When the edge electrons are in the weakly collisional banana regime, surprisingly, the present kinetic simulation confirms that the existing analytic expressions [represented by O. Sauter et al., Phys. Plasmas 6, 2834 (1999)] are still valid in this unconventional region, except in a thin radial layer in contact with the magnetic separatrix. The agreement arises from the dominance of the electron contribution to the bootstrap current compared with the ion contribution and from a reasonable separation of the trapped-passing dynamics without a strong collisional mixing. However, when the pedestal electrons are in the plateau-collisional regime, there is significant deviation of numerical results from the existing analytic formulas, mainly due to large effective collisionality of the passing and the boundary-layer trapped particles in the edge region. In a conventional aspect ratio tokamak, the edge bootstrap current from kinetic simulation can be significantly less than that from the Sauter formula if the electron collisionality is high.
On the other hand, when the aspect ratio is close to unity, the collisional edge bootstrap current can be significantly greater than that from the Sauter formula. Rapid toroidal rotation of the magnetic field lines at the high field side of a tight aspect-ratio tokamak is believed to be the cause of the different behavior. A new analytic fitting formula, as a simple modification to the Sauter formula, is obtained to bring the analytic expression to a better agreement with the edge kinetic simulation results.

  16. Bootstrap current for the edge pedestal plasma in a diverted tokamak geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koh, S.; Choe, W.; Chang, C. S.

    The edge bootstrap current plays a critical role in the equilibrium and stability of the steep edge pedestal plasma. The pedestal plasma has an unconventional and difficult neoclassical property, as compared with the core plasma. It has a narrow passing particle region in velocity space that can be easily modified or destroyed by Coulomb collisions. At the same time, the edge pedestal plasma has steep pressure and electrostatic potential gradients whose scale-lengths are comparable with the ion banana width, and includes a magnetic separatrix surface, across which the topological properties of the magnetic field and particle orbits change abruptly. A drift-kinetic particle code XGC0, equipped with a mass-momentum-energy conserving collision operator, is used to study the edge bootstrap current in a realistic diverted magnetic field geometry with a self-consistent radial electric field. When the edge electrons are in the weakly collisional banana regime, surprisingly, the present kinetic simulation confirms that the existing analytic expressions [represented by O. Sauter et al., Phys. Plasmas 6, 2834 (1999)] are still valid in this unconventional region, except in a thin radial layer in contact with the magnetic separatrix. The agreement arises from the dominance of the electron contribution to the bootstrap current compared with the ion contribution and from a reasonable separation of the trapped-passing dynamics without a strong collisional mixing. However, when the pedestal electrons are in the plateau-collisional regime, there is significant deviation of numerical results from the existing analytic formulas, mainly due to large effective collisionality of the passing and the boundary-layer trapped particles in the edge region. In a conventional aspect ratio tokamak, the edge bootstrap current from kinetic simulation can be significantly less than that from the Sauter formula if the electron collisionality is high.
On the other hand, when the aspect ratio is close to unity, the collisional edge bootstrap current can be significantly greater than that from the Sauter formula. Rapid toroidal rotation of the magnetic field lines at the high field side of a tight aspect-ratio tokamak is believed to be the cause of the different behavior. A new analytic fitting formula, as a simple modification to the Sauter formula, is obtained to bring the analytic expression to a better agreement with the edge kinetic simulation results.
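    The qualitative structure of such collisionality corrections can be sketched numerically. This is purely illustrative: the trapped-fraction estimate is the leading-order large-aspect-ratio one, and the suppression factor is a made-up interpolation, NOT the Sauter formula or the new fitting formula of this paper:

```python
import numpy as np

def trapped_fraction(eps):
    """Leading-order large-aspect-ratio estimate f_t ~ 1.46 * sqrt(eps),
    where eps is the inverse aspect ratio."""
    return 1.46 * np.sqrt(eps)

def collisional_suppression(nu_star, a=1.0, b=1.0):
    """Illustrative interpolation only: unity in the banana regime
    (nu_star -> 0), decaying as collisions detrap the boundary-layer
    particles. The coefficients a, b are arbitrary placeholders."""
    return 1.0 / (1.0 + a * np.sqrt(nu_star) + b * nu_star)

eps = 0.3
j_banana = trapped_fraction(eps)  # stand-in for the banana-regime value
j_collisional = j_banana * collisional_suppression(5.0)
# Increasing effective collisionality suppresses the kinetic bootstrap
# current relative to the banana-regime analytic value, the trend the
# abstract reports for high-collisionality edge electrons at conventional
# aspect ratio.
```

Fitting formulas of this general shape (a collisionless value times collisionality-dependent factors) are what the abstract's "simple modification to the Sauter formula" refers to.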

  17. Collisionality and temperature dependence of the edge main-ion co-current rotation profile feature on DIII-D

    NASA Astrophysics Data System (ADS)

    Haskey, Shaun; Grierson, Brian; Ashourvan, Arash; Battaglia, Devon; Chrystal, Colin; Burrell, Keith; Groebner, Richard; Degrassie, John; Stagner, Luke; Stoltzfus-Dueck, Timothy; Pablant, Novimir

    2017-10-01

    A new edge main-ion (D+) CER system and upgraded edge impurity system are revealing clear differences between the main-ion and dominant impurity (C6+) toroidal rotation from the pedestal top to the scrape-off layer on DIII-D with implications for intrinsic rotation studies. A peaked co-current edge toroidal rotation is observed for the main ion species near the outboard midplane separatrix with values up to 140 km/s for low collisionality QH modes. In lower power (PNBI = 0.8 MW) H-modes the edge rotation is still present but reduced to 50 km/s. D+ and C6+ toroidal rotation differences are presented for a variety of scenarios covering a significant range of edge collisionality and Ti. Observations are compared with predictions from several models including collisionless ion orbit loss calculations and more complete modeling using the XGC0 code, which also predicts 140 km/s edge rotation for low collisionality QH mode cases. Work supported by the U.S. DOE under DE-AC02-09CH11466, No. DE-FC02-04ER54698, and DE-FC02-95ER54309.

  18. Direct measurements and comparisons between deuterium and impurity rotation and density profiles in the H-mode steep gradient region on DIII-D

    NASA Astrophysics Data System (ADS)

    Haskey, S. R.; Grierson, B. A.; Chrystal, C.; Stagner, L.; Burrell, K.; Groebner, R. J.; Kaplan, D. H.; Nazikian, R.

    2016-10-01

    The recently commissioned edge deuterium charge exchange recombination (CER) spectroscopy diagnostic on DIII-D is providing direct measurements of the deuterium rotation, temperature, and density in H-mode pedestals. The deuterium temperature and temperature scale length can be 50% lower than the carbon measurement in the gradient region of the pedestal, indicating that the ion pedestal pressure can deviate significantly from that inferred from carbon CER. In addition, deuterium exhibits a larger toroidal rotation in the co-Ip direction near the separatrix compared with the carbon. These differences are qualitatively consistent with theory-based models that identify thermal ion orbit loss across the separatrix as a source of intrinsic angular momentum. The first direct measurements of the deuterium density pedestal profile show an inward shift of the impurity pedestal compared with the main ions, validating neoclassical predictions from the XGC0 code. Work supported by the U.S. DOE under DE-FC02-04ER54698 and DE-AC02-09CH11466.

  19. Improved kinetic neoclassical transport calculation for a low-collisionality QH-mode pedestal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaglia, D. J.; Burrell, K. H.; Chang, C. S.

    The role of neoclassical, anomalous and neutral transport to the overall H-mode pedestal and scrape-off layer (SOL) structure in an ELM-free QH-mode discharge on DIII-D is explored using XGC0, a 5D full-f multi-species particle-in-cell drift-kinetic solver with self-consistent neutral recycling and sheath potentials. The work in this paper builds on previous work aimed at achieving quantitative agreement between the flux-driven simulation and the experimental electron density, impurity density and orthogonal measurements of impurity temperature and flow profiles. Improved quantitative agreement is achieved by performing the calculations with a more realistic electron mass, larger neutral density and including finite-Larmor-radius corrections self-consistently in the drift-kinetic motion of the particles. Consequently, the simulations provide stronger evidence that the radial electric field (Er) in the pedestal is primarily established by the required balance between the loss of high-energy tail main ions against a pinch of colder main ions and impurities. The kinetic loss of a small population of ions carrying a large proportion of energy and momentum leads to a separation of the particle and energy transport rates and introduces a source of intrinsic edge torque. Ion orbit loss and finite orbit width effects drive the energy distributions away from Maxwellian, and describe the anisotropy, poloidal asymmetry and local minimum near the separatrix observed in the Ti profile.

  20. Improved kinetic neoclassical transport calculation for a low-collisionality QH-mode pedestal

    DOE PAGES

    Battaglia, D. J.; Burrell, K. H.; Chang, C. S.; ...

    2016-07-15

    The role of neoclassical, anomalous and neutral transport to the overall H-mode pedestal and scrape-off layer (SOL) structure in an ELM-free QH-mode discharge on DIII-D is explored using XGC0, a 5D full-f multi-species particle-in-cell drift-kinetic solver with self-consistent neutral recycling and sheath potentials. The work in this paper builds on previous work aimed at achieving quantitative agreement between the flux-driven simulation and the experimental electron density, impurity density and orthogonal measurements of impurity temperature and flow profiles. Improved quantitative agreement is achieved by performing the calculations with a more realistic electron mass, larger neutral density and including finite-Larmor-radius corrections self-consistently in the drift-kinetic motion of the particles. Consequently, the simulations provide stronger evidence that the radial electric field (Er) in the pedestal is primarily established by the required balance between the loss of high-energy tail main ions against a pinch of colder main ions and impurities. The kinetic loss of a small population of ions carrying a large proportion of energy and momentum leads to a separation of the particle and energy transport rates and introduces a source of intrinsic edge torque. Ion orbit loss and finite orbit width effects drive the energy distributions away from Maxwellian, and describe the anisotropy, poloidal asymmetry and local minimum near the separatrix observed in the Ti profile.

  1. Nonlinear Two Fluid and Kinetic ELM Simulations

    NASA Astrophysics Data System (ADS)

    Strauss, H. R.; Sugiyama, L.; Chang, C. S.; Ku, S.; Hientzsch, B.; Breslau, J.; Park, W.; Samtaney, R.; Adams, M.; Jardin, S.

    2006-04-01

    Simulations of ELMs using dissipative MHD, two fluid MHD, and neoclassical kinetic physics models are being carried out using the M3D code [1]. Resistive MHD simulations of nonlinear edge pressure and current driven instabilities have been performed, initialized with realistic DIII-D equilibria. Simulations show the saturation of the modes and relaxation of equilibrium profiles. Linear simulations including two fluid effects show the stabilization of toroidal mode number n = 10 modes, when the Hall parameter H, the ratio of ion skin depth to major radius, exceeds a threshold. Nonlinear simulations are being done including gyroviscous stabilization. Kinetic effects are incorporated by coupling with the XGC code [2], which is able to simulate the edge plasma density and pressure pedestal buildup. These profiles are being used to initialize M3D simulations of an ELM crash and pedestal relaxation. The goal is to simulate an ELM cycle. [1] Park, W., Belova, E.V., Fu, G.Y., Tang, X.Z., Strauss, H.R., Sugiyama, L.E., Phys. Plas. 6, 1796 (1999). [2] Chang, C.S., Ku, S., and Weitzner, H., Phys. Plas. 11, 2649 (2004)

  2. Pedestal and edge electrostatic turbulence characteristics from an XGC1 gyrokinetic simulation

    NASA Astrophysics Data System (ADS)

    Churchill, R. M.; Chang, C. S.; Ku, S.; Dominski, J.

    2017-10-01

    Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer (SOL)) is required in order to reliably predict performance in future fusion devices. We explore turbulent characteristics in the edge region from a multi-scale neoclassical and turbulent XGC1 gyrokinetic simulation in a DIII-D like tokamak geometry, here excluding neutrals and collisions. For an H-mode type plasma with steep pedestal, it is found that the electron density fluctuations increase towards the separatrix, and stay high well into the SOL, reaching a maximum value of δn_e/n̄_e ~ 0.18. Blobs are observed, born around the magnetic separatrix surface and propagate radially outward with velocities generally less than 1 km s-1. Strong poloidal motion of the blobs is also present, near 20 km s-1, consistent with E × B rotation. The electron density fluctuations show a negative skewness in the closed field-line pedestal region, consistent with the presence of ‘holes’, followed by a transition to strong positive skewness across the separatrix and into the SOL. These simulations indicate that not only neoclassical phenomena, but also turbulence, including the blob-generation mechanism, can remain important in the steep H-mode pedestal and SOL. Qualitative comparisons will be made to experimental observations.

  3. Pedestal and edge electrostatic turbulence characteristics from an XGC1 gyrokinetic simulation

    DOE PAGES

    Churchill, R. M.; Chang, C. S.; Ku, S.; ...

    2017-08-30

    Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer (SOL)) is required in order to reliably predict performance in future fusion devices. We explore turbulent characteristics in the edge region from a multi-scale neoclassical and turbulent XGC1 gyrokinetic simulation in a DIII-D like tokamak geometry, here excluding neutrals and collisions. For an H-mode type plasma with steep pedestal, it is found that the electron density fluctuations increase towards the separatrix, and stay high well into the SOL, reaching a maximum value of δn_e/n̄_e ~ 0.18. Blobs are observed, born around the magnetic separatrix surface and propagate radially outward with velocities generally less than 1 km s-1. Strong poloidal motion of the blobs is also present, near 20 km s-1, consistent with E × B rotation. The electron density fluctuations show a negative skewness in the closed field-line pedestal region, consistent with the presence of 'holes', followed by a transition to strong positive skewness across the separatrix and into the SOL. These simulations indicate that not only neoclassical phenomena, but also turbulence, including the blob-generation mechanism, can remain important in the steep H-mode pedestal and SOL. Lastly, qualitative comparisons will be made to experimental observations.
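    The consistency check between the observed blob poloidal speed and E × B rotation is a one-line formula, v = (E × B)/|B|². A sketch with illustrative numbers (the field values below are made up to give a 20 km/s drift; they are not taken from the simulation):

```python
import numpy as np

def exb_velocity(E, B):
    """E x B drift velocity v = (E x B) / |B|^2 for SI field vectors."""
    return np.cross(E, B) / np.dot(B, B)

# Hypothetical example: a 40 kV/m radial electric field in a 2 T toroidal
# field gives a 20 km/s poloidal drift, the order of magnitude quoted above.
E = np.array([4.0e4, 0.0, 0.0])  # V/m, radial direction
B = np.array([0.0, 0.0, 2.0])    # T, toroidal direction
v = exb_velocity(E, B)
speed = np.linalg.norm(v)        # 2.0e4 m/s = 20 km/s
```

Comparing such a drift estimate against blob velocities tracked in the simulation output is how the "consistent with E × B rotation" statement is typically checked.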

  4. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

    The Fourier-Bessel Particle-In-Cell code is a scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition combines the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) and those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup when compared to 3D simulations). The code is written in Python and can run both on CPU and GPU (GPU runs are typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and an input format very similar to Warp's (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.
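    The azimuthal part of such a cylindrical spectral decomposition can be illustrated with a plain FFT over the angle: a field F(r, θ) is expanded in e^{imθ} modes, and a field with weak angular variation is captured by just a few modes. The grid sizes and test field below are illustrative, not FBPIC defaults (and FBPIC additionally uses a Hankel transform radially, which this sketch omits):

```python
import numpy as np

# Cylindrical grid (r, theta); endpoint=False keeps the angular grid periodic.
n_r, n_theta = 32, 64
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
r = np.linspace(0.0, 1.0, n_r)

# Test field containing only m = 0 and m = 1 azimuthal content.
F = r[:, None] * (1.0 + 0.3 * np.cos(theta))[None, :]

# FFT over the azimuthal angle yields the mode amplitudes F_m(r).
modes = np.fft.fft(F, axis=1) / n_theta
power = np.abs(modes).sum(axis=0)   # total amplitude per azimuthal mode m

# All azimuthal content sits in m = 0 and m = +/-1; higher modes vanish to
# machine precision, which is why truncating at a few modes is so cheap.
```

Truncating the sum over m at a small number of modes is what yields the quoted orders-of-magnitude speedup relative to full 3D simulations for nearly axisymmetric problems.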

  5. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes

    NASA Astrophysics Data System (ADS)

    Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.

    2015-01-01

    Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on Monte Carlo (MC) radiation transport codes, MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website called OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4 (which uses the deterministic code HZETRN for transport). The study is set to investigate the impact of SPE spectra transported through a 10 or 20 g/cm2 Al shield followed by 30 g/cm2 of water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with the other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes showing closer agreement compared to the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from MC codes at lower energies (E < 100 MeV). Based on mean-square-difference analysis, the results from MCNPX and PHITS agree better for fluence, dose and dose equivalent when compared to OLTARIS results.

  6. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes.

    PubMed

    Aghara, S K; Sriprisan, S I; Singleterry, R C; Sato, T

    2015-01-01

    Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on Monte Carlo (MC) radiation transport codes, MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website called OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4 (which uses the deterministic code HZETRN for transport). The study is set to investigate the impact of SPE spectra transported through a 10 or 20 g/cm2 Al shield followed by 30 g/cm2 of water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with the other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes showing closer agreement compared to the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from MC codes at lower energies (E < 100 MeV). Based on mean-square-difference analysis, the results from MCNPX and PHITS agree better for fluence, dose and dose equivalent when compared to OLTARIS results.
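    Converting a transported fluence spectrum into a dose-equivalent value is a spectrum-weighted fold, H = ∫ Φ(E) c(E) dE, with c(E) an energy-dependent fluence-to-dose-equivalent conversion coefficient. The sketch below uses a made-up power-law spectrum and toy coefficients purely for illustration; real analyses use tabulated coefficients (e.g. ICRP tables), not this form:

```python
import numpy as np

# Toy energy grid and spectrum (MeV; particles/cm^2/MeV) -- illustrative only.
energies = np.logspace(0, 3, 50)
fluence = 1.0e4 * energies**-2.0
# Toy fluence-to-dose-equivalent conversion coefficients (Sv * cm^2).
coeff = 2.0e-10 * energies**0.8

# Fold spectrum and coefficients over energy bins to get H in Sv.
dE = np.gradient(energies)
dose_equiv = np.sum(fluence * coeff * dE)

# A low-energy discrepancy in the neutron fluence (as reported between
# OLTARIS and the MC codes below 100 MeV) propagates into H in proportion
# to the fraction of the folded dose coming from that energy range.
low = energies < 100.0
frac_low = np.sum((fluence * coeff * dE)[low]) / dose_equiv
```

This kind of fold is why code-to-code fluence differences at particular energies translate directly into dose and dose-equivalent differences.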

  7. Sandia Simple Particle Tracking (Sandia SPT) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen M.

    2015-06-15

    Sandia SPT is designed as software to accompany a methods book chapter, which provides an introduction on how to label and track individual proteins. The Sandia Simple Particle Tracking code uses techniques common to the image processing community; its value is that it facilitates implementing the methods described in the book chapter by providing the necessary open-source code. The code performs single-particle spot detection (or segmentation and localization) followed by tracking (connecting the detected particles into trajectories). The book chapter, which along with the headers in each file constitutes the documentation for the code, is: Anthony, S.M.; Carroll-Portillo, A.; Timlon, J.A., Dynamics and Interactions of Individual Proteins in the Membrane of Living Cells. In Anup K. Singh (Ed.), Single Cell Protein Analysis, Methods in Molecular Biology. Springer.
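    The detect-then-link pipeline described above can be sketched in a few lines. This is a deliberately minimal stand-in, not Sandia SPT's code: real implementations use Gaussian fitting for sub-pixel localization and more robust (e.g. globally optimal) linking than nearest neighbors:

```python
import numpy as np

def detect_spots(frame, threshold):
    """Toy spot detection: pixels above a threshold become detections.
    (Real codes fit a point-spread function for sub-pixel localization.)"""
    ys, xs = np.where(frame > threshold)
    return np.column_stack([ys, xs]).astype(float)

def link_frames(pts_a, pts_b, max_disp):
    """Nearest-neighbor linking between consecutive frames: the simplest way
    of connecting detections into trajectory segments."""
    links = []
    for i, p in enumerate(pts_a):
        d = np.linalg.norm(pts_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            links.append((i, j))
    return links

# Two synthetic frames: a single bright particle that moves by (1, 2) pixels.
frame1 = np.zeros((32, 32)); frame1[10, 10] = 5.0
frame2 = np.zeros((32, 32)); frame2[11, 12] = 5.0
pts1, pts2 = detect_spots(frame1, 1.0), detect_spots(frame2, 1.0)
links = link_frames(pts1, pts2, max_disp=5.0)  # particle 0 -> particle 0
```

The `max_disp` cutoff encodes the physical assumption that a particle cannot move farther than a few pixels between frames, which keeps spurious links out of the trajectories.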

  8. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--on other parallel machines once the machine-dependent parameters are known.
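
    The slab decomposition described above can be sketched in a few lines: particles are assigned to processors by one spatial coordinate, and the fraction whose owner changes after a push is a proxy for the particle-communication cost. This is a minimal illustration with invented grid size, processor count, and push; it is not the GCPIC implementation.

```python
import numpy as np

def slab_owner(x, nx, nproc):
    """Map particle x-positions (in [0, nx)) to processor ranks
    for a 1D 'slab' decomposition of an nx-cell grid."""
    cells_per_proc = nx / nproc
    return np.minimum((x / cells_per_proc).astype(int), nproc - 1)

rng = np.random.default_rng(0)
nx, nproc, npart = 64, 8, 10_000
x = rng.uniform(0, nx, npart)
v = rng.normal(0, 1.0, npart)

owner_before = slab_owner(x, nx, nproc)
x = (x + 0.1 * v) % nx                   # toy push + periodic wrap
owner_after = slab_owner(x, nx, nproc)

# Particles whose owner changed must be exchanged between nodes; this
# fraction is one proxy for the communication/computation balance.
moved = np.mean(owner_before != owner_after)
print(f"fraction of particles exchanged this step: {moved:.3f}")
```

A "rod" or "cube" decomposition would apply the same ownership map independently in two or three coordinates.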

  9. Flow Instability Tests for a Particle Bed Reactor Nuclear Thermal Rocket Fuel Element

    DTIC Science & Technology

    1993-05-01

    Only fragments of the scanned report documentation page are recoverable: the control software ran under GWBASIC 2.0 or higher (DOS 5.0 was installed on the machine), and since the source code was written in BASIC it was easy to make modifications. Distribution statement: approved for public release; distribution unlimited (339 pages).

  10. YAP Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Eric M.

    2004-05-20

    The YAP software library computes (1) electromagnetic modes, (2) electrostatic fields, (3) magnetostatic fields and (4) particle trajectories in 2d and 3d models. The code employs finite element methods on unstructured grids of tetrahedral, hexahedral, prism and pyramid elements, with linear through cubic element shapes and basis functions to provide high accuracy. The novel particle tracker is robust, accurate and efficient, even on unstructured grids with discontinuous fields. This software library is a component of the MICHELLE 3d finite element gun code.

  11. Modeling study of deposition locations in the 291-Z plenum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahoney, L.A.; Glissmeyer, J.A.

    The TEMPEST (Trent and Eyler 1991) and PART5 computer codes were used to predict the probable locations of particle deposition in the suction-side plenum of the 291-Z building, the exhaust fan building for the 234-5Z, 236-Z, and 232-Z buildings in the 200 Area of the Hanford Site. The TEMPEST code provided velocity fields for the airflow through the plenum. These velocity fields were then used with TEMPEST to model near-floor particle concentrations without particle sticking (100% resuspension). The same velocity fields were also used with PART5 to model particle deposition with sticking (0% resuspension). Among the parameters whose importance was tested were particle size, point of injection, and exhaust fan configuration.

  12. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remained untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and hierarchical time steps. The code applies adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Performance measurements with realistic particle distributions on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
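
    The gain from hierarchical (block) time steps can be illustrated with a toy accounting of force evaluations: each particle's desired step is rounded down to a power-of-two fraction of the largest step, and slow particles are then updated far less often than the fastest one. All numbers below are invented for illustration; a real code derives the step from the local acceleration and jerk.

```python
import numpy as np

# Toy comparison of shared vs hierarchical (block) time steps.
rng = np.random.default_rng(1)
accel = rng.lognormal(mean=0.0, sigma=2.0, size=1000)   # spread of |a|
dt_want = np.sqrt(0.01 / accel)          # per-particle desired step
dt_max = dt_want.max()

# Block stepping: round each step down to dt_max / 2^k.
levels = np.ceil(np.log2(dt_max / dt_want)).astype(int)

# Cost to advance everyone by one top-level step dt_max:
shared_evals = len(accel) * 2 ** levels.max()   # all at the smallest step
block_evals = int(np.sum(2 ** levels))          # each at its own step

print(f"force evaluations, shared: {shared_evals}, block: {block_evals}, "
      f"speedup: {shared_evals / block_evals:.1f}x")
```

The speedup depends entirely on the spread of the acceleration distribution, which is why clustered, realistic particle distributions benefit most.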

  13. Investigation of neutral particle dynamics in Aditya tokamak plasma with DEGAS2 code

    NASA Astrophysics Data System (ADS)

    Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.; Manchanda, R.; Banerjee, S.; Ramaiya, N.; Sharma, Deepti; Srinivasan, R.; Stotler, D. P.; Aditya Team

    2017-08-01

    Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration; it has been successfully applied here to the Aditya limiter configuration. The penetration of neutral hydrogen atoms is studied with various atomic and molecular contributions, and it is found that the maximum contribution comes from dissociation processes. The Hα spectrum is also simulated and matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have an energy of ~2.0 eV. Furthermore, the variation of the neutral hydrogen density and Hα emissivity profiles is analysed for various edge temperature profiles, and it is found that Hα emission at the plasma edge changes little with edge temperature (7-40 eV).

  14. Investigation of neutral particle dynamics in Aditya tokamak plasma with DEGAS2 code

    DOE PAGES

    Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.; ...

    2017-06-09

    Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration; it has been successfully applied here to the Aditya limiter configuration. The penetration of neutral hydrogen atoms is studied with various atomic and molecular contributions, and it is found that the maximum contribution comes from dissociation processes. The Hα spectrum is also simulated, which was matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have an energy of ~2.0 eV. Furthermore, the variation of the neutral hydrogen density and Hα emissivity profiles is analysed for various edge temperature profiles, and it is found that Hα emission at the plasma edge changes little with edge temperature (7 to 40 eV).

  15. A 3D particle Monte Carlo approach to studying nucleation

    NASA Astrophysics Data System (ADS)

    Köhn, Christoph; Enghoff, Martin Bødker; Svensmark, Henrik

    2018-06-01

    The nucleation of sulphuric acid molecules plays a key role in the formation of aerosols. We present a three-dimensional particle Monte Carlo model to study the growth of sulphuric acid clusters as well as its dependence on the ambient temperature and the initial particle density. We initiate a swarm of sulphuric acid-water clusters with a size of 0.329 nm, with densities between 10^7 and 10^8 cm^-3, at temperatures between 200 and 300 K and a relative humidity of 50%. After every time step, we update the positions of particles as a function of size-dependent diffusion coefficients. If two particles encounter each other, we merge them, adding their volumes and masses. Conversely, we check after every time step whether a cluster evaporates, liberating a molecule. We present the spatial distribution as well as the size distribution calculated from individual clusters. We also calculate the nucleation rate of clusters with a radius of 0.85 nm as a function of time, initial particle density, and temperature. The nucleation rates obtained from the presented model agree well with experimentally obtained values and with those of a numerical model that serves as a benchmark of our code. In contrast to previous nucleation models, we present for the first time a code capable of tracing individual particles and thus of capturing the physics related to the discrete nature of particles.
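
    The merge step described above can be sketched as follows: random-walk clusters in a periodic box coalesce when they approach within the sum of their radii, conserving total volume. Box size, step length, and counts below are invented for illustration, and evaporation is omitted; this is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)
L, n0, steps = 30.0, 100, 100
pos = rng.uniform(0, L, (n0, 3))
vol = np.ones(n0)                       # monomer volumes (arbitrary units)

for _ in range(steps):
    r = vol ** (1.0 / 3.0)              # radius grows as volume^(1/3)
    pos = (pos + rng.normal(0, 0.5, pos.shape)) % L   # diffusion step
    alive = np.ones(len(vol), dtype=bool)
    for i in range(len(vol)):           # naive O(N^2) encounter test
        if not alive[i]:
            continue
        for j in range(i + 1, len(vol)):
            if alive[j] and np.linalg.norm(pos[i] - pos[j]) < r[i] + r[j]:
                vol[i] += vol[j]        # merge j into i: add volumes/masses
                alive[j] = False
    pos, vol = pos[alive], vol[alive]

print(f"{n0} monomers -> {len(vol)} clusters, largest volume {vol.max():.0f}")
```

A production version would use size-dependent diffusion coefficients, the minimum-image convention for distances, and a neighbour list instead of the O(N^2) pair test.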

  16. Plasma Rotation and Radial Electric Field Response to Resonant Magnetic Perturbations in DIII-D

    NASA Astrophysics Data System (ADS)

    Moyer, R. A.

    2012-10-01

    Analysis of DIII-D experiments has revealed a complex picture of the evolution of the toroidal rotation vtor and radial electric field Er when applying edge resonant magnetic perturbations (RMPs) in H-mode plasmas. Measurements indicate that RMPs induce changes to the plasma rotation and Er across the plasma profile, well into the plasma core where islands or stochasticity are not expected. In the pedestal, the change in Er comes primarily from the v×B changes even though the ion diamagnetic contribution to Er is larger. This allows the RMP to change Er faster than the transport timescale for altering the pressure gradient. For n=3 RMPs, the pedestal vtor goes to zero as fast as the RMP current rises, suggesting increased toroidal viscosity with the RMP, followed by a slow rise in co-plasma-current vtor (pedestal "spin-up") as the pedestal density pumps out. This spin-up could result from a reduction in ELM-induced momentum transport or a resonant jxB torque due to radial current. As vtor becomes more positive and the pressure pedestal narrows, the point where the electron perpendicular rotation is ~0 moves out toward the top of the pedestal; increasing the RMP current moves this crossing point closer to the top of the pedestal. These changes reduce the mean ExB shearing rate across the outer half of the discharge from several times the linear growth rate for intermediate-scale turbulence to less than the linear growth rate, consistent with increased turbulent transport. Full-f kinetic simulations with self-consistent plasma response and Er using the XGC0 code have qualitatively reproduced the observed profile and Er changes. These results suggest that, similar to their role in regulating H-mode plasma transport and stability, plasma rotation and Er play a critical role in the effect of RMPs on plasma performance.

  17. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of PSC, which supports modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
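
    The idea behind space-filling-curve load balancing can be sketched generically: order the patches along a curve that keeps spatial neighbours close, then cut the curve into contiguous runs of roughly equal load. The Morton (Z-order) curve, the greedy prefix split, and the patch loads below are all illustrative choices, not PSC's actual algorithm or API.

```python
def morton2d(ix, iy, bits=8):
    """Interleave the bits of (ix, iy) into a Z-order curve index."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return code

def balance(patch_loads, nranks):
    """Assign contiguous runs of curve-ordered patches to ranks so each
    rank gets roughly 1/nranks of the total load (greedy prefix split)."""
    order = sorted(patch_loads, key=lambda p: morton2d(*p))
    total = sum(patch_loads.values())
    target, assign, acc, rank = total / nranks, {}, 0.0, 0
    for p in order:
        if acc >= target and rank < nranks - 1:
            acc, rank = 0.0, rank + 1
        assign[p] = rank
        acc += patch_loads[p]
    return assign

# 4x4 patch grid with one heavy quadrant, distributed over 4 ranks.
loads = {(ix, iy): 1.0 + 5.0 * (ix < 2 and iy < 2)
         for ix in range(4) for iy in range(4)}
assign = balance(loads, 4)
per_rank = [sum(l for p, l in loads.items() if assign[p] == r) for r in range(4)]
print(per_rank)
```

Because the curve preserves locality, neighbouring patches mostly land on the same rank, keeping ghost-cell traffic low while the load is spread.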

  18. A class of ejecta transport test problems

    NASA Astrophysics Data System (ADS)

    Oro, David M.; Hammerberg, J. E.; Buttler, William T.; Mariam, Fesseha G.; Morris, Christopher L.; Rousculp, Chris; Stone, Joseph B.

    2012-03-01

    Hydro code implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f0(m,u;t). Some properties of this source distribution function have been determined from Taylor- and supported-shockwave experiments. Such experiments measure the mass moment of f0 under vacuum conditions, assuming weak particle-particle interactions and, usually, fully inelastic scattering (capture) of ejecta particles by piezoelectric diagnostic probes. Recently, experiments on planar ejection of W particles into vacuum, Ar, and Xe gas atmospheres have been carried out to provide benchmark data for transport model development and validation. We present those experimental results and compare them with modeled transport of the W ejecta particles in Ar and Xe.

  19. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    DTIC Science & Technology

    2014-03-27

    Thesis: Verification and Validation of Monte Carlo N-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box (AFIT-ENP-14-M-05), presented to the Faculty, Department of Engineering, Air Force Institute of Technology. Distribution Statement A: approved for public release; distribution unlimited. Only the title page of the scanned document is recoverable.

  20. Ion Loss as an Intrinsic Momentum Source in Tokamaks

    NASA Astrophysics Data System (ADS)

    Boedo, J. A.

    2014-10-01

    A series of coupled experiments in DIII-D and simulations provide strong support for the kinetic loss of thermal ions from the edge as the mechanism for toroidal momentum generation in tokamaks. Measurements of the near-separatrix parallel velocity of D+ with Mach probes show a 1-2 cm wide D+ parallel velocity peak at the separatrix reaching 40-60 km/s, up to half the thermal velocity, always in the direction of the plasma current. The magnitude and width of the velocity layer are in excellent agreement with a first-principles, collisionless, kinetic computation of selective particle loss through the loss cone, including for the first time the measured steady-state radial electric field Er. C6+ rotation in the core, measured with charge exchange recombination (CER) spectroscopy, is correlated with the edge D+ velocity. XGC0 computations, which include collisions and kinetic ions and electrons, show results that agree with the measurements and indicate that two mechanisms are relevant: 1) ion orbit loss and 2) a growing influence of the Pfirsch-Schlüter mechanism in H-mode gradients. The inclusion of the measured Er in the loss-cone model drastically affects the width and magnitude of the velocity profile and improves agreement with the Mach probe measurements. A fine structure in Er is found, still of unknown origin, featuring large (10-20 kV/m) positive peaks in the SOL and at, or slightly inside, the separatrix in low-power L- or H-mode conditions. This high-resolution probe measurement of Er agrees with CER measurements where the techniques overlap. The flow is attenuated in higher-collisionality conditions, consistent with a depleted loss-cone mechanism. Supported by the US DOE under DE-FG02-07ER54917, DE-FC02-08ER54977, & DE-FC02-04ER54698.

  1. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond it. It also implements FDPS (Framework for Developing Particle Simulator), an open-source library designed for fully automatic parallelization of particle simulations, to parallelize the Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R_cut and the time step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R_cut, and Δt/R_cut ~ 0.1 is necessary to simulate accurately the accretion process of a planet over ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
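
    The particle-particle/particle-tree split can be illustrated on a single body: the acceleration is divided into a direct sum over neighbours inside the cut-off radius and a contribution from everything beyond it. In this sketch the "far" part is computed exactly rather than with a Barnes-Hut tree, and the particle numbers are invented; the point is only that the split is exact for a sharp cutoff.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rcut = 64, 0.3
x = rng.uniform(0, 1, (n, 3))
m = np.full(n, 1.0 / n)                  # equal masses, G = 1

def pair_accel(i, mask):
    """Acceleration on particle i from the particles selected by mask."""
    d = x[mask] - x[i]
    r = np.linalg.norm(d, axis=1)
    return np.sum(m[mask, None] * d / r[:, None] ** 3, axis=0)

i = 0
r = np.linalg.norm(x - x[i], axis=1)
near = (r < rcut) & (r > 0)              # PP part (Hermite in PENTACLE)
far = r >= rcut                          # tree part (Barnes-Hut in PENTACLE)
a_split = pair_accel(i, near) + pair_accel(i, far)
a_direct = pair_accel(i, r > 0)
print(np.allclose(a_split, a_direct))    # prints True: split matches direct sum
```

In the real scheme a smooth changeover function replaces the sharp cutoff, and the far field is approximated by the tree, which is where the error budget controlled by Δt/R_cut enters.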

  2. Comparison study of the pT distributions of the charged particles in p-Pb interactions at LHC energies

    NASA Astrophysics Data System (ADS)

    Ali, Y.; Tabassam, U.; Suleymanov, M.; Bhatti, A. S.

    2017-10-01

    Transverse momentum (pT) distributions of primary charged particles were compared to simulations using the Ultra-Relativistic Quantum Molecular Dynamics (UrQMD) transport model and the HIJING 1.0 model in minimum bias p-Pb collisions at √s_NN = 5.02 TeV in the pseudorapidity regions |η| < 0.3, 0.3 < |η| < 0.8, and 0.8 < |η| < 1.3, and in the transverse momentum range 0.5 < pT < 20 GeV/c. The simulated distributions were then compared with the ALICE data, and it was observed that UrQMD predicts systematically higher yields than HIJING 1.0. Neither code can describe the experimental data over the full range 0.5 < pT < 20 GeV/c, though for pT > 5 GeV/c the model predictions are very close to the experimental results for particles with |η| < 0.3 and 0.3 < |η| < 0.8. The ratio of the yield at forward pseudorapidity to that at |η| < 0.3 was also studied. It was observed that the predictions of the models depend on η. In the experiment there is no essential difference between the yields in the intervals |η| < 0.3, 0.3 < |η| < 0.8, and 0.8 < |η| < 1.3; the differences are significant for the models, where the ratios are systematically less than 1. This means that the results are not connected to a medium effect but reflect the Cronin effect. We are led to conclude that the codes cannot satisfactorily take into account the leading effect due to the asymmetric p-Pb fragmentation.

  3. Charged particle transport in magnetic fields in EGSnrc.

    PubMed

    Malkov, V N; Rogers, D W O

    2016-07-01

    The purpose of this work is to accurately and efficiently implement charged particle transport in a magnetic field in EGSnrc and to validate the code for use in phantom and ion chamber simulations. The effect of the magnetic field on the particle motion and position is determined using one- and three-point numerical integrations of the Lorentz force on the charged particle and is added to the condensed history calculation performed by the EGSnrc PRESTA-II algorithm. The code is tested with a Fano test adapted for the presence of magnetic fields and is compatible with all EGSnrc-based applications, including egs++. Ion chamber calculations are compared to experimental measurements, and the effect of the code on efficiency and timing is determined. Agreement with the Fano test's theoretical value is obtained at the 0.1% level for large step sizes and in magnetic fields as strong as 5 T. The NE2571 dose calculations agree with experiment within 0.5% up to 1 T, beyond which deviations of up to 1.2% are observed. Uniform air gaps of 0.5 and 1 mm and a misalignment of the incoming photon beam with the magnetic field are found to produce variations in the normalized dose on the order of 1%. These findings necessitate a clear definition of all experimental conditions to allow for accurate Monte Carlo simulations. Ion chamber simulation times are increased by only 38%, and a 10 × 10 × 6 cm³ water phantom with (3 mm)³ voxels experiences a 48% increase in simulation time compared to the default EGSnrc with no magnetic field. The incorporation of magnetic field effects in EGSnrc provides the capability to calculate high-accuracy ion chamber and phantom doses for use in MRI-radiation systems. Further, the effect of apparently insignificant experimental details is found to be accentuated by the presence of the magnetic field.
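
    The basic ingredient, stepwise integration of the magnetic Lorentz force, can be checked in isolation. The sketch below is not EGSnrc's PRESTA-II implementation: it uses a Boris-style half-angle rotation (which conserves speed exactly) to verify that a stepped electron orbit reproduces the analytic gyroradius r = mv/(|q|B) in a uniform 1.5 T field (nonrelativistic, SI units; all parameters invented).

```python
import numpy as np

def boris_rotate(v, B, q, m, dt):
    """Rotate velocity v about B by the Boris half-angle construction."""
    t = (q * dt / (2.0 * m)) * B         # half-step rotation vector
    v1 = v + np.cross(v, t)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    return v + np.cross(v1, s)           # |v| is conserved exactly

q, m = -1.602e-19, 9.109e-31             # electron charge (C) and mass (kg)
B = np.array([0.0, 0.0, 1.5])            # 1.5 T, MRI-linac-like field
v = np.array([1.0e7, 0.0, 0.0])          # initial speed 1e7 m/s
x = np.zeros(3)
dt = 1e-13                               # ~240 steps per gyro-period
ys = []
for _ in range(5000):
    v = boris_rotate(v, B, q, m, dt)
    x = x + dt * v
    ys.append(x[1])

r_numeric = (max(ys) - min(ys)) / 2.0    # orbit extent in y gives the radius
r_theory = m * 1.0e7 / (abs(q) * 1.5)
print(f"gyroradius: numeric {r_numeric:.3e} m, theory {r_theory:.3e} m")
```

A naive explicit Euler step would spiral outward (energy grows every step), which is why rotation-based or implicit formulations are preferred for magnetic deflection.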

  4. What happens to full-f gyrokinetic transport and turbulence in a toroidal wedge simulation?

    DOE PAGES

    Kim, Kyuho; Chang, C. S.; Seo, Janghoon; ...

    2017-01-24

    Here, in order to save computing time or to fit a gyrokinetic turbulence simulation of a tokamak plasma into limited computing hardware, a toroidal wedge simulation may be utilized, in which only a partial toroidal section is modeled with a periodic boundary condition in the toroidal direction. The most severe restriction in the wedge simulation is expected to be on the longest-wavelength turbulence, i.e., ion temperature gradient (ITG) driven turbulence. The global full-f gyrokinetic code XGC1 is used to compare the transport and turbulence properties of a toroidal wedge simulation against a full torus simulation in an ITG-unstable plasma in a model toroidal geometry. It is found that (1) the convergence study in the wedge number needs to be conducted all the way down to the full torus in order to avoid a false convergence, (2) a reasonably accurate simulation can be performed if the correct wedge number N can be identified, (3) the validity of a wedge simulation may be checked by performing a wave-number spectral analysis of the turbulence amplitude |δΦ| and assuring that the variation of δΦ between the discrete kθ values is less than 25% compared to the peak |δΦ|, and (4) a frequency spectrum may not be used for the validity check of a wedge simulation.

  5. What happens to full-f gyrokinetic transport and turbulence in a toroidal wedge simulation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyuho; Chang, C. S.; Seo, Janghoon

    Here, in order to save computing time or to fit a gyrokinetic turbulence simulation of a tokamak plasma into limited computing hardware, a toroidal wedge simulation may be utilized, in which only a partial toroidal section is modeled with a periodic boundary condition in the toroidal direction. The most severe restriction in the wedge simulation is expected to be on the longest-wavelength turbulence, i.e., ion temperature gradient (ITG) driven turbulence. The global full-f gyrokinetic code XGC1 is used to compare the transport and turbulence properties of a toroidal wedge simulation against a full torus simulation in an ITG-unstable plasma in a model toroidal geometry. It is found that (1) the convergence study in the wedge number needs to be conducted all the way down to the full torus in order to avoid a false convergence, (2) a reasonably accurate simulation can be performed if the correct wedge number N can be identified, (3) the validity of a wedge simulation may be checked by performing a wave-number spectral analysis of the turbulence amplitude |δΦ| and assuring that the variation of δΦ between the discrete kθ values is less than 25% compared to the peak |δΦ|, and (4) a frequency spectrum may not be used for the validity check of a wedge simulation.

  6. Fortran interface layer of the framework for developing particle simulator FDPS

    NASA Astrophysics Data System (ADS)

    Namekata, Daisuke; Iwasawa, Masaki; Nitadori, Keigo; Tanikawa, Ataru; Muranushi, Takayuki; Wang, Long; Hosono, Natsuki; Nomura, Kentaro; Makino, Junichiro

    2018-06-01

    Numerical simulations based on particle methods have been widely used in various fields, including astrophysics. To date, various versions of simulation software have been developed by individual researchers or research groups in each field, at a huge cost in time and effort, even though the numerical algorithms used are very similar. To improve the situation, we have developed a framework, called FDPS (Framework for Developing Particle Simulators), which enables researchers to easily develop massively parallel particle simulation codes for arbitrary particle methods. Until version 3.0, FDPS provided an API (application programming interface) for the C++ programming language only. This limitation comes from the fact that FDPS is developed using the template feature of C++, which is essential to support arbitrary particle data types. However, many researchers use Fortran to develop their codes, and the previous versions of FDPS required such people to invest much time in learning C++, which is inefficient. To cope with this problem, we developed a Fortran interface layer in FDPS, which provides an API for Fortran. In order to support arbitrary particle data types in Fortran, we designed the Fortran interface layer as follows. Based on a given Fortran derived data type representing a particle, a Python script provided by us automatically generates a library that manipulates the C++ core part of FDPS. This library is seen from the Fortran side as a Fortran module providing an API of FDPS, and it uses C programs internally to interoperate Fortran with C++. In this way, we have overcome several technical issues in emulating a `template' in Fortran. Using the Fortran interface, users can develop all parts of their codes in Fortran. We show that the overhead of the Fortran interface part is sufficiently small and that a code written in Fortran shows performance practically identical to one written in C++.
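
    The generation step can be illustrated with a toy code generator (this is not FDPS's actual script): given a field-level description of a particle type, it emits the C struct that a shim layer between Fortran (via ISO_C_BINDING) and C++ could compile against. The field names and types below are invented.

```python
# Hypothetical particle-type description: (name, C type, array length).
fields = [("pos", "double", 3), ("vel", "double", 3), ("mass", "double", 1)]

def c_struct(name, fields):
    """Emit a C struct declaration matching a Fortran derived type."""
    lines = ["typedef struct {"]
    for fname, ctype, n in fields:
        dim = f"[{n}]" if n > 1 else ""
        lines.append(f"    {ctype} {fname}{dim};")
    lines.append(f"}} {name};")
    return "\n".join(lines)

print(c_struct("full_particle", fields))
```

Because both Fortran's `bind(c)` derived types and C structs have a defined memory layout, generated code like this lets the two sides exchange particle arrays without copying.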

  7. BCM-2.0 - The new version of computer code "Basic Channeling with Mathematica©"

    NASA Astrophysics Data System (ADS)

    Abdrashitov, S. V.; Bogdanov, O. V.; Korotchenko, K. B.; Pivovarov, Yu. L.; Rozhkova, E. I.; Tukhfatullin, T. A.; Eikhorn, Yu. L.

    2017-07-01

    A new symbolic-numerical code for investigating channeling phenomena in the periodic potential of a crystal has been developed. The code is written in the Wolfram Language, taking advantage of its analytical programming capabilities. The newly developed packages were successfully applied to simulate scattering, radiation, electron-positron pair production, and other effects connected with the channeling of relativistic particles in aligned crystals. The simulation results have been validated against data from channeling experiments carried out at SAGA LS.

  8. GW100: Benchmarking G0W0 for Molecular Systems.

    PubMed

    van Setten, Michiel J; Caruso, Fabio; Sharifzadeh, Sahar; Ren, Xinguo; Scheffler, Matthias; Liu, Fang; Lischner, Johannes; Lin, Lin; Deslippe, Jack R; Louie, Steven G; Yang, Chao; Weigend, Florian; Neaton, Jeffrey B; Evers, Ferdinand; Rinke, Patrick

    2015-12-08

    We present the GW100 set. GW100 is a benchmark set of the ionization potentials and electron affinities of 100 molecules computed with the GW method using three independent GW codes and different GW methodologies. The quasi-particle energies of the highest-occupied molecular orbitals (HOMO) and lowest-unoccupied molecular orbitals (LUMO) are calculated for the GW100 set at the G0W0@PBE level using the software packages TURBOMOLE, FHI-aims, and BerkeleyGW. The use of these three codes allows for a quantitative comparison of the type of basis set (plane wave or local orbital) and handling of unoccupied states, the treatment of core and valence electrons (all electron or pseudopotentials), the treatment of the frequency dependence of the self-energy (full frequency or more approximate plasmon-pole models), and the algorithm for solving the quasi-particle equation. Primary results include reference values for future benchmarks, best practices for convergence within a particular approach, and average error bars for the most common approximations.
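
    The last item in the comparison, the algorithm for solving the quasi-particle equation E = ε + Σ(E), can be illustrated with a toy fixed-point iteration. The single-pole model self-energy and all numbers below are invented for illustration; real codes evaluate Σ from G and W and may use graphical or Newton solvers instead.

```python
eps = -7.0                    # mean-field eigenvalue (eV), invented

def sigma(E):                 # model self-energy with a pole at -20 eV
    return 1.5 + 4.0 / (E + 20.0)

E = eps                       # start from the mean-field value
for _ in range(50):
    E = eps + sigma(E)        # converges here since |dSigma/dE| << 1

print(f"quasi-particle energy: {E:.4f} eV")
```

When the self-energy varies rapidly near a pole, such a fixed point can fail or find a spurious solution, which is one reason the solver choice matters in cross-code comparisons.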

  9. Spatially-Dependent Modelling of Pulsar Wind Nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-03-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multi-zone model to investigate changes in the particle spectrum as they traverse the pulsar wind nebula, by considering a time and spatially-dependent B-field, spatially-dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.

  10. Spatially dependent modelling of pulsar wind nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-07-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multizone model to investigate changes in the particle spectrum as they traverse the pulsar wind nebula, by considering a time and spatially dependent B-field, spatially dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.
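
    A one-zone toy version of the transport solve described in these two records can be written in a few lines: steady power-law injection plus synchrotron losses dE/dt = -bE², advected upwind on a log-spaced energy grid. The units, loss coefficient, and injection index are invented, and diffusion, convection, and inverse Compton losses are omitted; the point is only the expected steepening of the slope by about one power above the cooling break.

```python
import numpy as np

E = np.logspace(0, 6, 300)               # energy grid (arbitrary units)
dE = np.diff(E)
N = np.zeros_like(E)
b, p = 1e-5, 2.2
Q = E ** (-p)                            # steady injection spectrum
dt = 0.5 * np.min(dE / (b * E[1:] ** 2)) # CFL-limited step
for _ in range(4000):
    F = b * E ** 2 * N                   # downward flux in energy space
    dN = np.zeros_like(N)
    dN[:-1] = (F[1:] - F[:-1]) / dE      # upwind: particles arrive from above
    dN[-1] = -F[-1] / dE[-1]             # top bin only drains
    N += dt * (Q + dN)

# Below the cooling break the slope is the injection index -p; above it,
# synchrotron losses steepen the spectrum to roughly -(p + 1).
lo = np.polyfit(np.log(E[20:60]), np.log(N[20:60]), 1)[0]
hi = np.polyfit(np.log(E[220:260]), np.log(N[220:260]), 1)[0]
print(f"slope below break: {lo:.2f}, above break: {hi:.2f}")
```

A multi-zone model like the one in the paper evolves such a spectrum in each radial zone with zone-dependent B-field and bulk speed, then integrates the emissivities along the line of sight.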

  11. Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas

    NASA Astrophysics Data System (ADS)

    Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.

  12. Improvement of Mishchenko's T-matrix code for absorbing particles.

    PubMed

    Moroz, Alexander

    2005-06-10

    The use of Gaussian elimination with backsubstitution for matrix inversion in scattering theories is discussed. Within the framework of the T-matrix method (the state-of-the-art code by Mishchenko is freely available at http://www.giss.nasa.gov/~crmim), it is shown that the domain of applicability of Mishchenko's FORTRAN 77 (F77) code can be substantially expanded in the direction of strongly absorbing particles, where the current code fails to converge. Such an extension is especially important if the code is to be used in nanoplasmonic or nanophotonic applications involving metallic particles. At the same time, convergence can also be achieved for large nonabsorbing particles, in which case the non-Numerical Algorithms Group option of Mishchenko's code diverges. A computer F77 implementation of Mishchenko's code supplemented with Gaussian elimination with backsubstitution is freely available at http://www.wave-scattering.com.
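The abstract's central numerical point, that a linear system should be solved by elimination with backsubstitution rather than by forming an explicit inverse, can be illustrated in a few lines. This sketch uses NumPy on an arbitrary complex matrix standing in for a T-matrix (the matrix and right-hand side are illustrative):

```python
import numpy as np

# np.linalg.solve performs an LU factorization, i.e. Gaussian elimination
# with backsubstitution, and never forms an explicit inverse. For the
# well-conditioned random matrix below both routes agree; for the
# ill-conditioned matrices of strongly absorbing particles, avoiding the
# explicit inverse is the numerically safer choice the paper advocates.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
b = rng.standard_normal(50)

x_solve = np.linalg.solve(A, b)        # elimination + backsubstitution
x_inv = np.linalg.inv(A) @ b           # explicit inverse, for comparison
residual = np.linalg.norm(A @ x_solve - b)
```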

  13. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is also described. The second code is a modified version of the GENMIX boundary layer code, which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  14. A new Monte Carlo code for light transport in biological tissue.

    PubMed

    Torres-García, Eugenio; Oros-Pantoja, Rigoberto; Aranda-Lara, Liliana; Vieyra-Reyes, Patricia

    2018-04-01

    The aim of this work was to develop an event-by-event Monte Carlo code for light transport (called MCLTmx) to identify and quantify ballistic, diffuse, and absorbed photons, as well as their interaction coordinates inside biological tissue. The mean free path length between two interactions was computed for scattering or absorption processes, and scattering angles were calculated when necessary, until the photon disappeared or left the region of interest. A three-layer array (air-tissue-air) was used, forming a semi-infinite sandwich. The light source was placed at (0,0,0), emitting towards (0,0,1). The input data were: refractive indices, target thickness (0.02, 0.05, 0.1, 0.5, and 1 cm), number of particle histories, and λ, from which the code calculated the anisotropy, scattering, and absorption coefficients. Validation shows differences of less than 0.1% compared with values reported in the literature. The MCLTmx code discriminates between ballistic and diffuse photons and, inside biological tissue, calculates specular reflection, diffuse reflection, ballistic transmission, diffuse transmission, and absorption, all as functions of wavelength and thickness. The MCLTmx code can be useful for light transport inside any medium by changing the parameters that describe the new medium: anisotropy, dispersion and attenuation coefficients, and refractive indices for a specific wavelength.
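The two sampling steps at the heart of such a photon-transport loop can be sketched compactly. This is a conceptual illustration, not the MCLTmx code; the function names and coefficient values are hypothetical:

```python
import random
import math

# Core Monte Carlo steps for light transport: (1) draw an exponential
# free path from the total attenuation coefficient mu_t, and (2) draw a
# scattering angle from the Henyey-Greenstein phase function with
# anisotropy g. mu_t = 10/cm is an illustrative tissue-like value.

def free_path(mu_t, rng=random):
    """Sample a free path length (cm) between interactions."""
    return -math.log(rng.random()) / mu_t

def hg_cos_theta(g, rng=random):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return 2.0 * rng.random() - 1.0      # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - frac * frac) / (2.0 * g)

random.seed(1)
mu_t = 10.0
mean_path = sum(free_path(mu_t) for _ in range(200000)) / 200000
cos_sample = hg_cos_theta(0.9)
```

Averaging many sampled paths recovers the mean free path 1/mu_t, a standard sanity check for such samplers.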

  15. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with standard spectra and with the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR, and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
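The unfolding problem the record describes is, at its core, a search for a spectrum f minimizing the mismatch between R·f and the measured pulse-height distribution. A toy sketch of that idea with a basic particle swarm, not the published SDPSO code (the response matrix, swarm parameters, and sizes are all illustrative):

```python
import numpy as np

# Recover a 4-bin "spectrum" f from a synthetic measurement m = R @ f_true
# by minimizing ||R f - m|| with a textbook particle swarm optimizer.

rng = np.random.default_rng(2)
R = rng.random((8, 4))                 # toy response matrix
f_true = np.array([1.0, 2.0, 0.5, 1.5])
m = R @ f_true                         # synthetic pulse-height data

def cost(f):
    return np.linalg.norm(R @ f - m)

n, dim = 40, 4
pos = rng.random((n, dim)) * 3.0       # swarm positions (candidate spectra)
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_c = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_c)].copy()

for _ in range(500):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, None)    # spectra are non-negative
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_c
    pbest[improved], pbest_c[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_c)].copy()
```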

  16. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    PubMed

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy, negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters that produce distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can computationally decode signals from different microfluidic channels with >90% accuracy even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to creating integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
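The CDMA principle the sensor borrows can be shown in miniature: assign each channel an orthogonal code, sum the overlapping pulses, and separate them by correlation. The Walsh-Hadamard codes and code length below are illustrative, not the device's actual code set:

```python
import numpy as np

# Four mutually orthogonal Walsh codes, one per hypothetical channel.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

# Particles pass channels 0 and 2 at the same time, so their coded
# pulses overlap on the single shared electrical output.
signal = H[0] + H[2]

# Matched-filter decoding: correlate the summed signal with every code.
correlations = H @ signal / H.shape[1]
detected = [i for i, c in enumerate(correlations) if c > 0.5]
```

Orthogonality makes each correlation 1 for an active channel and 0 otherwise, which is why overlapping events remain separable.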

  17. Strong scaling of general-purpose molecular dynamics simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Nguyen, Trung Dac; Anderson, Joshua A.; Lui, Pak; Spiga, Filippo; Millan, Jaime A.; Morse, David C.; Glotzer, Sharon C.

    2015-07-01

    We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5×.

  18. Experimental Study and Analytical Methods for Particle Bed Dryout With Heterogeneous Particles and Pressure Variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miettinen, Jaakko; Sairanen, Risto; Lindholm, Ilona

    2002-07-01

    Interest in studying the dryout heat flux in particle beds stems from the need to quantify debris coolability margins during a hypothetical severe reactor accident. When the molten core has relocated to the containment floor, one accident management concept is based on cooling the corium by water injection on top. Earlier experimental and analytical work has concentrated on homogeneous particle beds at atmospheric pressure. For plant safety assessment in Finland, there is a need to consider heterogeneous particle mixtures, layered particle bed setups, and varied pressures. A facility has been constructed at VTT to measure the dryout heat flux in a heterogeneous particle bed. The bed dimensions are 0.3 m in diameter and 0.6 m in height, with a mixture of 0.1 to 10 mm particles. The facility has a pressure range from atmospheric to 6 bar (overpressure). The bed is heated by spirals of a resistance band. Preliminary experiments have been carried out, and a more systematic set of data is expected to be available in spring 2002. To support the experiments, analytical models have been developed for qualification of the experimental results. The first comparison is made against various critical heat flux correlations developed in the 1980s and 1990s for homogeneous bed conditions. The second comparison is made against the 1-D and 0-D models developed by Lipinski. The most detailed analysis of the transient process conditions and dryout predictions is performed using a two-dimensional, drift-flux based thermohydraulic solution for the particle bed immersed in water, implemented in a code called PILEXP. Already the first validation results against the preliminary tests indicate that the transient process conditions and the mechanisms related to dryout are best explained and understood using a multidimensional, transient code, in which all details of the process control can be modeled as well. Heterogeneous and stratified beds cannot be adequately described by single critical heat flux correlations. (authors)

  19. Simulation of 0.3 MWt AFBC test rig burning Turkish lignites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Selcuk, N.; Degirmenci, E.; Oymak, O.

    1997-12-31

    A system model coupling bed and freeboard models for continuous combustion of lignite particles of wide size distribution, burning in their own ash in a fluidized bed combustor, was modified to incorporate: (1) a procedure for faster computation of particle size distributions (PSDs) without any sacrifice in accuracy; (2) an energy balance on char particles for determining the variation of temperature with particle size; and (3) a plug flow assumption for the interstitial gas. An efficient and accurate computer code developed for the solution of the conservation equations for energy and chemical species was applied to predicting the behavior of a 0.3 MWt AFBC test rig burning low quality Turkish lignites. The construction and operation of the test rig were carried out within the scope of a cooperation agreement between Middle East Technical University (METU) and Babcock and Wilcox GAMA (BWG) under the auspices of the Canadian International Development Agency (CIDA). Predicted concentration and temperature profiles and particle size distributions of solid streams were compared with measured data and found to be in reasonable agreement. The computer code replaces the conventional numerical integration of the analytical solution of the population balance with direct integration in ODE form using the powerful integrator LSODE (Livermore Solver for Ordinary Differential Equations), resulting in a two-orders-of-magnitude decrease in CPU (Central Processing Unit) time.
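The record's key numerical change, integrating the population balance directly in ODE form with LSODE, can be sketched on a toy problem. SciPy's "LSODA" method descends from the same Hindmarsh ODEPACK family; the rate constants and size classes below are illustrative, not the paper's model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy population balance: first-order burnout of three char size
# classes, dn/dt = -k * n, integrated directly as an ODE system.
k = np.array([0.5, 1.0, 2.0])        # per-class burnout rates, 1/s
n0 = np.array([100.0, 50.0, 25.0])   # initial particle counts per class

sol = solve_ivp(lambda t, n: -k * n, (0.0, 1.0), n0,
                method="LSODA", rtol=1e-8, atol=1e-10)
n_final = sol.y[:, -1]
expected = n0 * np.exp(-k)           # analytic decay, for verification
```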

  20. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  1. Ozone - Current Air Quality Index

    MedlinePlus

    Local air quality conditions and forecasts by ZIP code, state, or current location, including AQI maps. Related topics: Air Quality Basics, the Air Quality Index, ozone, particle pollution, and smoke.

  2. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

    Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Because of the very small time steps (on the order of the inverse plasma frequency) and mesh sizes required, the computational demands can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, small computational domains and/or reduced dimensionality are usually used. In recent years, the available central processing unit (CPU) power has increased strongly. Together with massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m² and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. 
    The presentation first gives a brief overview of the current status of ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced, as well as their coupling to codes describing the whole source (PIC codes or fluid codes). Different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion are presented and discussed, along with selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. Recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 the size of the ITER NBI source) are presented.
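One building block of the PIC method reviewed above is charge deposition from particles to the mesh. A minimal sketch of cloud-in-cell (CIC) deposition on a 1D periodic grid, with illustrative positions and charges (not taken from any of the codes discussed):

```python
import numpy as np

# Cloud-in-cell deposition: each particle's charge is shared between its
# two neighbouring grid nodes with linear weights, so total charge on
# the mesh equals total particle charge.

nx, dx = 8, 1.0
x = np.array([0.25, 3.5, 3.9])      # particle positions (grid units)
q = np.array([1.0, 1.0, 2.0])       # particle charges

rho = np.zeros(nx)
i = np.floor(x / dx).astype(int)    # index of the left grid node
w = x / dx - i                      # fractional distance to that node
np.add.at(rho, i, q * (1.0 - w))    # weight to the left node
np.add.at(rho, (i + 1) % nx, q * w) # weight to the right node (periodic)

total = rho.sum()                   # deposition conserves charge
```

The same weights, applied in reverse, interpolate the mesh field back to the particle positions, which keeps the force interpolation consistent with the deposition.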

  3. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  4. Two-dimensional implosion simulations with a kinetic particle code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using a single particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of RAGE and statistical noise in the kinetic studies.

  6. OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon

    2010-10-01

    Octgrav is a very fast tree-code which runs on massively parallel Graphics Processing Units (GPUs) with the NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments are carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which runs entirely on the GPU. The tree construction and traverse algorithms are portable to many-core devices which support the CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during tree construction and shows an overall performance improvement of more than a factor of 20, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
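The opening angle quoted above governs the multipole-acceptance test performed during each tree walk. A sketch of that criterion (the function name and sample cells are illustrative, not Octgrav's implementation):

```python
# Barnes-Hut-style acceptance test: a tree cell of size s at distance d
# from the target particle is treated as a single multipole when
# s / d < theta; otherwise the cell is "opened" and its children are
# examined. theta = 0.5 matches the opening angle quoted above.

def accept(cell_size, distance, theta=0.5):
    """Return True if the cell may be approximated by its multipole."""
    return cell_size / distance < theta

cells = [(1.0, 10.0), (2.0, 3.0), (0.5, 0.9)]   # (size, distance) pairs
opened = [c for c in cells if not accept(*c)]    # cells needing refinement
```

Smaller theta opens more cells, trading speed for force accuracy, which is why it is the main accuracy knob in such codes.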

  7. Simulations of toroidal Alfvén eigenmode excited by fast ions on the Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Pei, Youbin; Xiang, Nong; Shen, Wei; Hu, Youjun; Todo, Y.; Zhou, Deng; Huang, Juan

    2018-05-01

    Kinetic-magnetohydrodynamic (MHD) hybrid simulations are carried out to study fast-ion-driven toroidal Alfvén eigenmodes (TAEs) on the Experimental Advanced Superconducting Tokamak (EAST). The first part of this article presents a linear benchmark between two kinetic-MHD codes, MEGA and M3D-K, based on a realistic EAST equilibrium. Parameter scans show that the frequency and the growth rate of the TAE given by the two codes agree with each other. The second part of this article discusses the resonant interaction between the TAE and fast ions simulated by the MEGA code. The results show that the TAE exchanges energy with co-current passing particles with parallel velocity |v∥| ≈ VA0/3 or |v∥| ≈ VA0/5, where VA0 is the Alfvén speed on the magnetic axis. The TAE destabilized by counter-current passing ions is also analyzed and found to have a much smaller growth rate than the TAE driven by co-current ions. One reason is that the overlap between the TAE spatial location and the counter-current ion orbits is narrow, so the wave-particle energy exchange is not efficient.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, H.R.

    This paper describes FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to treat neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.

  9. Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1999-01-01

    Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.

  10. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
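The data-reorganization strategy mentioned above can be illustrated with a common PIC trick: sorting particles by their owning grid cell so that the deposit/gather loops walk memory contiguously. A sketch with illustrative array sizes (NumPy here stands in for the Fortran arrays of the paper):

```python
import numpy as np

# Randomly ordered particles touch grid cells in a scattered pattern,
# which defeats the cache. Sorting by cell index makes consecutive
# particles access consecutive grid memory.

rng = np.random.default_rng(3)
nx = 64
x = rng.random(10000) * nx          # particle positions on a 1D grid
cell = x.astype(int)                # owning cell of each particle

order = np.argsort(cell, kind="stable")
x_sorted, cell_sorted = x[order], cell[order]

# After sorting, cell indices are non-decreasing, so grid accesses in a
# deposit loop proceed sequentially through memory.
monotone = bool(np.all(np.diff(cell_sorted) >= 0))
```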

  11. Chemically Reacting One-Dimensional Gas-Particle Flows

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Penny, M. M.

    1975-01-01

    The governing equations for the one-dimensional flow of a gas-particle system are discussed. Gas-particle effects are coupled via the system momentum and energy equations with the gas assumed to be chemically frozen or in chemical equilibrium. A computer code for calculating the one-dimensional flow of a gas-particle system is discussed and a user's input guide presented. The computer code provides for the expansion of the gas-particle system from a specified starting velocity and nozzle inlet geometry. Though general in nature, the final output of the code is a startline for initiating the solution of a supersonic gas-particle system in rocket nozzles. The startline includes gasdynamic data defining gaseous startline points from the nozzle centerline to the nozzle wall and particle properties at points along the gaseous startline.

  12. Multidimensional Multiphysics Simulation of TRISO Particle Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Hales; R. L. Williamson; S. R. Novascone

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element-based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries remain straightforward. Additionally, the flexibility to easily include new physical and material models, and the ability to couple to lower-length-scale simulations, make BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  13. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water to graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, kfl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths predominantly due to the contributions from alpha particles and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by kfl = 0.9964 + 0.0024·zw-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by kfl = 0.9947 + 0.0024·zw-eq with a relative standard uncertainty of 0.3%. 
These results are of direct relevance to graphite calorimetry in low-energy protons but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms as well as for soft tissues.
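The two fitted expressions quoted above are simple enough to evaluate directly. A quick sketch (the depth value is illustrative; z is the water-equivalent depth as defined in the abstract):

```python
# Fitted fluence correction factors from the abstract:
#   from fluence distributions: k_fl = 0.9964 + 0.0024 * z_w-eq
#   from dose ratios:           k_fl = 0.9947 + 0.0024 * z_w-eq

def kfl_fluence(z_weq):
    return 0.9964 + 0.0024 * z_weq

def kfl_dose(z_weq):
    return 0.9947 + 0.0024 * z_weq

k_surface = kfl_fluence(0.0)   # slightly below unity at shallow depth
k_deep = kfl_fluence(3.0)      # rises above unity approaching the peak
```

This reproduces the qualitative behavior stated in the abstract: values below unity near the surface, increasing with depth to above unity near the Bragg peak.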

  14. CUBE: Information-optimized parallel cosmological N-body simulation code

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Ran; Pen, Ue-Li; Wang, Xin

    2018-05-01

    CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can be as low as 6 bytes per particle. Particle-pairwise (PP) forces, cosmological neutrinos, and a spherical overdensity (SO) halo finder are included.

  15. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. 
    The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ≈4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the GNU General Public License.

  16. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. 
The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the GNU General Public License.
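    The kick-drift-kick leapfrog scheme that VINE offers as one of its integrator choices can be sketched in a few lines (a minimal Python sketch of the generic scheme, not VINE's Fortran 95 implementation; the harmonic-oscillator test is an illustration only):

```python
import math

def leapfrog(x, v, acc, dt, steps):
    """Second-order, symplectic kick-drift-kick leapfrog for
    dx/dt = v, dv/dt = acc(x)."""
    for _ in range(steps):
        v += 0.5 * dt * acc(x)  # half kick
        x += dt * v             # drift
        v += 0.5 * dt * acc(x)  # half kick
    return x, v

# Harmonic oscillator acc(x) = -x: the exact orbit has period 2*pi and
# conserved energy 0.5*(x^2 + v^2).
x, v = leapfrog(1.0, 0.0, lambda x: -x, 2 * math.pi / 1000, 1000)
```

    After one full period the particle returns close to its starting point and the energy error stays bounded, the property that makes symplectic schemes attractive for long N-body integrations.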

  17. Applications of the microdosimetric function implemented in the macroscopic particle transport simulation code PHITS.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji

    2012-01-01

    Microdosimetric quantities such as lineal energy are generally considered to be better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, it is necessary to integrate microdosimetric tools such as track-structure simulation codes with macroscopic particle transport simulation codes. As an integration approach, the mathematical model for calculating the PD of microdosimetric quantities developed based on track-structure simulations was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time, while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.
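    The frequency-mean and dose-mean lineal energies on which such microdosimetric analyses rest have compact standard definitions, sketched below (a generic Python sketch of the textbook formulas, not PHITS code; the toy spectrum is made up):

```python
def lineal_energy_means(y, f):
    """Frequency-mean yF and dose-mean yD lineal energy from a sampled
    probability density f(y) (standard microdosimetric definitions):
        yF = sum(y*f) / sum(f),   yD = sum(y^2*f) / sum(y*f)."""
    norm = sum(f)
    yF = sum(yi * fi for yi, fi in zip(y, f)) / norm
    yD = sum(yi * yi * fi for yi, fi in zip(y, f)) / (yF * norm)
    return yF, yD

# toy spectrum (made up): three lineal-energy bins in keV/um, flat density
yF, yD = lineal_energy_means([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

    Dose-weighted averages of a quality factor Q(y), as used for the risk estimates above, follow the same pattern.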

  18. COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics

    NASA Astrophysics Data System (ADS)

    Barletta, Paolo

    2012-02-01

    Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually along its trajectory, so properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions or the presence of more than two species in the trap.
    New version program summary
    Program title: COOL
    Catalogue identifier: AEHJ_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1 097 733
    No. of bytes in distributed program, including test data, etc.: 18 425 722
    Distribution format: tar.gz
    Programming language: C++
    Computer: Desktop
    Operating system: Linux
    RAM: 500 Mbytes
    Classification: 16.7, 23
    Catalogue identifier of previous version: AEHJ_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388
    Does the new version supersede the previous version?: Yes
    Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap.
    Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are treated with an acceptance/rejection mechanism, that is, by comparing a random number to the collision probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are simulated individually, so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated.
    Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test-run results could only be replicated poorly, because the simulations are very sensitive to machine background noise. In practice, since the particles are simulated for billions of steps, a small difference in the initial conditions due to the finiteness of double-precision reals can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness, we have introduced a quadruple-precision version of the code which yields the same results independently of the software used to compile it or the hardware architecture on which it is run.
    Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp extension, rather than .c++, to make them compatible with Windows. The random number generator routine, which is the computational core of the algorithm, has been rewritten in C++, so cross FORTRAN-C++ compilation is no longer needed. A quadruple-precision version of the code is provided alongside the original double-precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the code file system neater.
    Restrictions: The in-trap motion of the particles is treated classically.
    Running time: The running time is relatively short, 1-2 hours. However, it is convenient to replicate each simulation several times with different initialisations of the random sequence.
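    The acceptance/rejection collision test described under "Solution method" can be sketched as follows (a Python sketch of the generic DSMC acceptance step, not COOL's C++ source; all parameter names are hypothetical):

```python
import math
import random

def collides(v1, v2, sigma, sg_max):
    """Standard DSMC acceptance/rejection step: a candidate pair collides
    if a uniform random number falls below sigma*g / (sigma*g)_max, where
    g is the relative speed and sg_max an upper bound on sigma*g.  The
    number of candidate pairs per step is chosen elsewhere, proportional
    to the local density and the time step."""
    g = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
    return random.random() < sigma * g / sg_max
```

    With sigma = 1 and an upper bound sg_max = 1, a pair with relative speed 0.3 is accepted roughly 30% of the time.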

  19. 3D Multispecies Nonlinear Perturbative Particle Simulation of Intense Nonneutral Particle Beams (Research supported by the Department of Energy and the Short Pulse Spallation Source Project and LANSCE Division of LANL.)

    NASA Astrophysics Data System (ADS)

    Qin, Hong; Davidson, Ronald C.; Lee, W. Wei-Li

    1999-11-01

    The Beam Equilibrium Stability and Transport (BEST) code, a 3D multispecies nonlinear perturbative particle simulation code, has been developed to study collective effects in intense charged particle beams described self-consistently by the Vlasov-Maxwell equations. A Darwin model is adopted for transverse electromagnetic effects. As a 3D multispecies perturbative particle simulation code, it provides several unique capabilities. Since the simulation particles are used to simulate only the perturbed distribution function and self-fields, the simulation noise is reduced significantly. The perturbative approach also enables the code to investigate different physics effects separately, as well as simultaneously. The code can be easily switched between linear and nonlinear operation, and used to study both linear stability properties and nonlinear beam dynamics. These features, combined with the 3D and multispecies capabilities, provide an effective tool to investigate the electron-ion two-stream instability, periodically focused solutions in alternating focusing fields, and many other important problems in nonlinear beam dynamics and accelerator physics. Applications to the two-stream instability are presented.

  20. Risk of pneumonia in obstructive lung disease: A real-life study comparing extra-fine and fine-particle inhaled corticosteroids.

    PubMed

    Sonnappa, Samatha; Martin, Richard; Israel, Elliot; Postma, Dirkje; van Aalderen, Wim; Burden, Annie; Usmani, Omar S; Price, David B

    2017-01-01

    Regular use of inhaled corticosteroids (ICS) in patients with obstructive lung diseases has been associated with a higher risk of pneumonia, particularly in COPD. The risk of pneumonia has not previously been evaluated in relation to the ICS particle size and dose used. Historical cohort, UK database study of 23,013 patients with obstructive lung disease aged 12-80 years prescribed extra-fine or fine-particle ICS. The endpoints assessed during the outcome year were diagnosis of pneumonia, acute exacerbations and acute respiratory events in relation to ICS dose. To determine the association between ICS particle size, dose and risk of pneumonia in unmatched and matched treatment groups, logistic and conditional logistic regression models were used. 14,788 patients were stepped up to fine-particle ICS and 8,225 to extra-fine ICS. On unmatched analysis, patients stepping up to extra-fine ICS were significantly less likely to be coded for pneumonia (adjusted odds ratio [aOR] 0.60; 95% CI 0.37-0.97), to experience acute exacerbations (adjusted risk ratio [aRR] 0.91; 95% CI 0.85-0.97), or to experience acute respiratory events (aRR 0.90; 95% CI 0.86-0.94) compared with patients stepping up to fine-particle ICS. Patients prescribed daily ICS doses in excess of 700 mcg (fluticasone propionate equivalent) had a significantly higher risk of pneumonia (OR 2.38; 95% CI 1.17-4.83) compared with patients prescribed lower doses, irrespective of particle size. These findings suggest that patients with obstructive lung disease on extra-fine-particle ICS have a lower risk of pneumonia than those on fine-particle ICS, with those receiving higher ICS doses being at greater risk.
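    For readers unfamiliar with the reported effect measures, the unadjusted analogue of an odds ratio with a 95% Wald confidence interval can be computed from a 2x2 table as below (an illustrative sketch with made-up counts; the study itself used adjusted logistic and conditional logistic regression):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% Wald CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# hypothetical counts: 10/100 events in one arm, 20/100 in the other
or_, lo, hi = odds_ratio(10, 90, 20, 80)
```

    An OR below 1 with an upper confidence limit below 1, as for the pneumonia coding result above, indicates a statistically significant reduction in odds.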

  1. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leapfrog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed-memory parallel computer, we use the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
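    The field solve described above, Poisson's equation in Fourier space, can be illustrated in 1D with a plain DFT (a stdlib-only Python sketch, not the hypercube code; a production PIC code would use a parallel FFT):

```python
import cmath
import math

def poisson_periodic_1d(rho, L):
    """Solve phi'' = -rho on a periodic grid via Fourier space:
    phi_k = rho_k / k^2 for k != 0.  The k = 0 mode is dropped,
    i.e. overall charge neutrality is assumed."""
    n = len(rho)
    rho_k = [sum(rho[j] * cmath.exp(-2j * math.pi * m * j / n)
                 for j in range(n)) for m in range(n)]
    phi_k = [0j] * n
    for m in range(1, n):
        mm = m if m <= n // 2 else m - n  # signed mode number
        k = 2 * math.pi * mm / L
        phi_k[m] = rho_k[m] / k ** 2
    return [sum(phi_k[m] * cmath.exp(2j * math.pi * m * j / n)
                for m in range(n)).real / n for j in range(n)]

# rho(x) = cos(x) on [0, 2*pi) has the exact solution phi(x) = cos(x)
n, L = 32, 2 * math.pi
rho = [math.cos(2 * math.pi * j / n) for j in range(n)]
phi = poisson_periodic_1d(rho, L)
```

    Because the test charge density is a single Fourier mode, the grid solution matches the analytic one to roundoff.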

  2. First ERO2.0 modeling of Be erosion and non-local transport in JET ITER-like wall

    NASA Astrophysics Data System (ADS)

    Romazanov, J.; Borodin, D.; Kirschner, A.; Brezinsek, S.; Silburn, S.; Huber, A.; Huber, V.; Bufferand, H.; Firdaouss, M.; Brömmel, D.; Steinbusch, B.; Gibbon, P.; Lasa, A.; Borodkina, I.; Eksaeva, A.; Linsmeier, Ch; Contributors, JET

    2017-12-01

    ERO is a Monte-Carlo code for modeling plasma-wall interaction and 3D plasma impurity transport for applications in fusion research. The code has undergone a significant upgrade (ERO2.0) which allows increasing the simulation volume in order to cover the entire plasma edge of a fusion device, allowing a more self-consistent treatment of impurity transport and comparison with a larger number and variety of experimental diagnostics. In this contribution, the physics-relevant technical innovations of the new code version are described and discussed. The new capabilities of the code are demonstrated by modeling of beryllium (Be) erosion of the main wall during JET limiter discharges. Results for erosion patterns along the limiter surfaces and global Be transport including incident particle distributions are presented. A novel synthetic diagnostic, which mimics experimental wide-angle 2D camera images, is presented and used for validating various aspects of the code, including erosion, magnetic shadowing, non-local impurity transport, and light emission simulation.

  3. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal-number-density particle models. StarSmasher solves for hydrodynamic forces by calculating the pressure for each particle as a function of the particle's properties: density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara switch to prevent unphysical interparticle penetration. The code also adds an artificial relaxation force to the equations of motion, introducing a drag term in the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
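    The cubic spline smoothing kernel mentioned above is the standard M4 kernel; in 3D with smoothing length h it can be written as follows (a generic sketch of the textbook kernel, not StarSmasher's source):

```python
import math

def w_cubic_spline(r, h):
    """Standard 3D cubic spline (M4) SPH kernel with compact support 2h,
    q = r/h:
      W = (1/(pi h^3)) * (1 - 1.5 q^2 + 0.75 q^3)  for 0 <= q < 1
        = (1/(pi h^3)) * 0.25 * (2 - q)^3          for 1 <= q < 2
        = 0                                        otherwise."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)  # 3D normalisation
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```

    The two polynomial pieces join continuously at q = 1, and the kernel integrates to unity over its support, which is what makes the SPH density estimate consistent.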

  4. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties in the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. that they can predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.

    Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration; it has been successfully applied here to the Aditya tokamak's limiter configuration. The penetration of neutral hydrogen atoms is studied with various atomic and molecular contributions, and it is found that the maximum contribution comes from the dissociation processes. The Hα spectrum is also simulated and matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have energies of ~2.0 eV. Furthermore, the variation of the neutral hydrogen density and Hα emissivity profiles is analysed for various edge temperature profiles, and it is found that the Hα emission at the plasma edge changes little with the variation of edge temperature (7 to 40 eV).

  6. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    Acronyms: PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and Heavy Ion Transport code System. "... the radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the ..." "... same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 ..."

  7. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    Acronyms: PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and Heavy Ion Transport code System. "... the radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input ..." "... dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA ..."

  8. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, an Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, only a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated, to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
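    An implicit particle trajectory update of the kind described can be illustrated with its simplest instance, a particle velocity relaxing toward the local gas velocity with response time tau (a generic backward-Euler sketch; the abstract does not specify FDNS's actual scheme, so this is illustrative only):

```python
def advance_particle_velocity(v, u_gas, tau, dt):
    """Backward-Euler (implicit) update for Stokes drag
    dv/dt = (u_gas - v) / tau, stable for any time step:
        v_new = (v + (dt/tau) * u_gas) / (1 + dt/tau)."""
    return (v + (dt / tau) * u_gas) / (1.0 + dt / tau)

v = 0.0
for _ in range(100):  # relax from rest toward the gas velocity u_gas = 1
    v = advance_particle_velocity(v, 1.0, tau=1.0, dt=1.0)
```

    Unlike an explicit update, this remains stable even when dt greatly exceeds tau, which is why implicit schemes are attractive for stiff particle-gas coupling.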

  9. Smoothed Particle Hydrodynamic Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  10. Parcels v0.9: prototyping a Lagrangian ocean analysis framework for the petascale age

    NASA Astrophysics Data System (ADS)

    Lange, Michael; van Sebille, Erik

    2017-11-01

    As ocean general circulation models (OGCMs) move into the petascale age, where the output of single simulations exceeds petabytes of storage space, tools to analyse the output of these models will need to scale up too. Lagrangian ocean analysis, where virtual particles are tracked through hydrodynamic fields, is an increasingly popular way to analyse OGCM output, by mapping pathways and connectivity of biotic and abiotic particulates. However, the current software stack of Lagrangian ocean analysis codes is not dynamic enough to cope with the increasing complexity, scale and need for customization of use-cases. Furthermore, most community codes are developed for stand-alone use, making it a nontrivial task to integrate virtual particles at runtime of the OGCM. Here, we introduce the new Parcels code, which was designed from the ground up to be sufficiently scalable to cope with petascale computing. We highlight its API design that combines flexibility and customization with the ability to optimize for HPC workflows, following the paradigm of domain-specific languages. Parcels is primarily written in Python, utilizing the wide range of tools available in the scientific Python ecosystem, while generating low-level C code and using just-in-time compilation for performance-critical computation. We show a worked-out example of its API, and validate the accuracy of the code against seven idealized test cases. This version 0.9 of Parcels is focused on laying out the API, with future work concentrating on support for curvilinear grids, optimization, efficiency and at-runtime coupling with OGCMs.
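    The core operation of such Lagrangian analysis, advecting a virtual particle through a velocity field, can be sketched with a fourth-order Runge-Kutta step (a minimal Python sketch, not the Parcels API; Parcels generates low-level C for this instead):

```python
import math

def rk4_step(x, y, u, dt):
    """One 4th-order Runge-Kutta step of dx/dt = u(x, y) for a particle
    in a steady 2D velocity field u returning (ux, uy)."""
    k1 = u(x, y)
    k2 = u(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = u(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = u(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

rotation = lambda x, y: (-y, x)  # solid-body rotation, period 2*pi
x, y = 1.0, 0.0
for _ in range(1000):  # integrate one full revolution
    x, y = rk4_step(x, y, rotation, 2 * math.pi / 1000)
```

    After one revolution the particle returns to its starting point to high accuracy; in a real analysis the analytic field would be replaced by interpolated OGCM output.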

  11. In vitro cell irradiation systems based on 210Po alpha source: construction and characterisation

    NASA Technical Reports Server (NTRS)

    Szabo, J.; Feher, I.; Palfalvi, J.; Balashazy, I.; Dam, A. M.; Polonyi, I.; Bogdandi, E. N.

    2002-01-01

    One way of studying the risk to human health of low-level radiation exposure is to perform biological experiments on living cell cultures. Two 210Po alpha-particle emitting devices, with 0.5 and 100 MBq activity, were designed and constructed to perform such experiments by irradiating monolayers of cells. Estimates of the dose rate at the cell surface were obtained from measurements with a PIPS alpha-particle spectrometer and from calculations with the SRIM 2000 Monte Carlo charged-particle transport code. Particle fluence area distributions were measured by solid state nuclear track detectors. The design and dosimetric characterisation of the devices are discussed.

  12. Ultra-high-energy cosmic rays from low-luminosity active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Duţan, Ioana; Caramete, Laurenţiu I.

    2015-03-01

    We investigate the production of ultra-high-energy cosmic rays (UHECRs) in relativistic jets from low-luminosity active galactic nuclei (LLAGN). We start by proposing a model for the UHECR contribution from the black holes (BHs) in LLAGN, which have jet powers Pj ≤ 10^46 erg s^-1. This is in contrast to the opinion that only high-luminosity AGN can accelerate particles to energies ≥ 50 EeV. We rewrite the equations which describe the synchrotron self-absorbed emission of a non-thermal particle distribution to obtain the observed radio flux density from sources with a flat-spectrum core and its relationship to the jet power. We find that the UHECR flux depends on the observed radio flux density, the distance to the AGN, and the BH mass, where the particle acceleration regions can be sustained by magnetic energy extraction from the BH at the center of the AGN. We use a complete sample of 29 radio sources with a total flux density at 5 GHz greater than 0.5 Jy to make predictions for the maximum particle energy, luminosity, and flux of the UHECRs from nearby AGN. These predictions are then used in a semi-analytical code developed in Mathematica (SAM code) as inputs for Monte Carlo simulations to obtain the distribution of arrival directions at the Earth and the energy spectrum of the UHECRs, taking into account their deflection in intergalactic magnetic fields. For comparison, we also use the CRPropa code with the same initial conditions as for the SAM code. Importantly, to calculate the energy spectrum we also include the weighting of the UHECR flux per UHECR source. Finally, we compare the energy spectrum of the UHECRs with that obtained by the Pierre Auger Observatory.

  13. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to ~10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
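    The weak-scaling efficiency used in such analyses is conventionally the runtime of the smallest run divided by the runtime at n cores, with the problem size per core held fixed (a sketch with made-up timing numbers, not MFiX profiling data):

```python
def weak_scaling_efficiency(runtimes):
    """Weak scaling: work per core is fixed, so the ideal runtime is flat.
    Efficiency at n cores = t(smallest run) / t(n cores)."""
    base = runtimes[min(runtimes)]
    return {n: base / t for n, t in sorted(runtimes.items())}

# hypothetical runtimes (seconds) for a fixed per-core problem size
eff = weak_scaling_efficiency({1: 100.0, 8: 105.0, 64: 125.0, 512: 250.0})
```

    In this made-up example efficiency stays near 1 at small core counts and degrades to 0.4 at 512 cores, the kind of falloff that profiling is meant to explain.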

  14. CEM2k and LAQGSM Codes as Event-Generators for Space Radiation Shield and Cosmic Rays Propagation Applications

    NASA Technical Reports Server (NTRS)

    Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.

    2002-01-01

    Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist only in a very limited energy range, or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for the description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications for studies of cosmic ray propagation in the Galaxy.

  15. Performance tuning of N-body codes on modern microprocessors: I. Direct integration with a hermite scheme on x86_64 architecture

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro; Hut, Piet

    2006-12-01

    The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code, running on these chips, can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N^2-type integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops, for double-precision accuracy. In subsequent papers, we will discuss other variations, including combinations with N log N codes, single-precision implementations, and performance on other microprocessors.
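    The pair-wise kernel being tuned is, in plain form, the Newtonian acceleration together with its time derivative (the jerk) that a Hermite scheme requires (a reference Python sketch of the standard formulas, with G = 1 and no softening; not the tuned x86_64 assembly):

```python
def acc_and_jerk(ri, vi, rj, vj, mj):
    """Acceleration and jerk on particle i from particle j (G = 1):
        a = mj * r / |r|^3
        j = mj * (v / |r|^3 - 3 (r.v) r / |r|^5),
    with r = rj - ri and v = vj - vi."""
    r = [rj[k] - ri[k] for k in range(3)]
    v = [vj[k] - vi[k] for k in range(3)]
    r2 = sum(c * c for c in r)
    r3 = r2 ** 1.5
    rv = sum(r[k] * v[k] for k in range(3))
    a = [mj * r[k] / r3 for k in range(3)]
    jerk = [mj * (v[k] / r3 - 3.0 * rv * r[k] / (r2 * r3)) for k in range(3)]
    return a, jerk
```

    The jerk is exactly the time derivative of the acceleration, which a central-difference check confirms; a Hermite integrator uses both to build a fourth-order predictor-corrector.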

  16. DYNECHARM++: a toolkit to simulate coherent interactions of high-energy charged particles in complex structures

    NASA Astrophysics Data System (ADS)

    Bagli, Enrico; Guidi, Vincenzo

    2013-08-01

    A toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures, called DYNECHARM++, has been developed. The code is written in C++, taking advantage of object-oriented programming. The code is capable of evaluating the electrical characteristics of complex atomic structures and of simulating and tracking particle trajectories within them. The electrical characteristics are calculated through their expansion in Fourier series. Two different approaches to simulating the interaction have been adopted, relying on the full integration of particle trajectories under the continuum potential approximation and on the definition of cross-sections of coherent processes. Finally, the code has proved able to reproduce experimental results and to simulate the interaction of charged particles with complex structures.
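    The expansion of the electrical characteristics in a Fourier series can be illustrated with a toy reconstruction of a periodic planar continuum potential. The coefficients below are hypothetical placeholders, not values used by DYNECHARM++ (real coefficients come from the crystal's form factors):

```python
import numpy as np

def potential_from_fourier(coeffs, d, x):
    """Reconstruct a periodic planar potential from Fourier
    coefficients V_k:  V(x) = sum_k V_k cos(2 pi k x / d),
    with d the interplanar spacing."""
    x = np.asarray(x, dtype=float)
    v = np.zeros_like(x)
    for k, vk in enumerate(coeffs):
        v += vk * np.cos(2.0 * np.pi * k * x / d)
    return v
```

    Storing only a handful of coefficients makes the potential cheap to evaluate at every step of a tracked trajectory.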

  17. iQIST v0.7: An open source continuous-time quantum Monte Carlo impurity solver toolkit

    NASA Astrophysics Data System (ADS)

    Huang, Li

    2017-12-01

    In this paper, we present a new version of the iQIST software package, which is capable of solving various quantum impurity models using the hybridization expansion (or strong coupling expansion) continuous-time quantum Monte Carlo algorithm. In the revised version, the software architecture is completely redesigned. A new basis (intermediate representation or singular value decomposition representation) for the single-particle and two-particle Green's functions is introduced. Many useful physical observables are added, such as the charge susceptibility, fidelity susceptibility, Binder cumulant, and autocorrelation time. In particular, we optimized the measurement of the two-particle Green's functions. Both the particle-hole and particle-particle channels are supported. In addition, the block structure of the two-particle Green's functions is exploited to accelerate the calculation. Finally, we fixed some known bugs and limitations. The computational efficiency of the code is greatly enhanced.
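    Two of the observables mentioned, the Binder cumulant and the integrated autocorrelation time, have simple textbook estimators. A generic NumPy sketch of both (not iQIST's actual implementation or API):

```python
import numpy as np

def binder_cumulant(samples):
    """Fourth-order Binder cumulant  U4 = 1 - <m^4> / (3 <m^2>^2);
    vanishes for Gaussian-distributed samples."""
    m2 = np.mean(samples**2)
    m4 = np.mean(samples**4)
    return 1.0 - m4 / (3.0 * m2**2)

def autocorrelation_time(samples):
    """Integrated autocorrelation time: 1/2 plus the sum of the
    normalized autocorrelation function, truncated where it first
    turns non-positive (a common simple windowing rule)."""
    x = samples - samples.mean()
    n = len(x)
    var = x @ x / n
    tau = 0.5
    for t in range(1, n):
        c = (x[:-t] @ x[t:]) / (n - t) / var
        if c <= 0:
            break
        tau += c
    return tau
```

    For uncorrelated samples the estimated time is close to 1/2; strongly correlated Markov chains give much larger values, which is why the observable is useful for judging measurement quality.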

  18. ZENO: N-body and SPH Simulation Codes

    NASA Astrophysics Data System (ADS)

    Barnes, Joshua E.

    2011-02-01

    The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. ZENO programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include:
    - Structured data file utilities, which facilitate basic operations on binary data, including import/export of ZENO data to other systems.
    - Snapshot generation routines, which create particle distributions with various properties. Systems with user-specified density profiles can be realized in collisionless or gaseous form; multiple spherical and disk components may be set up in mutual equilibrium.
    - Snapshot manipulation routines, which permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle.
    - Simulation codes, including both pure N-body and combined N-body/SPH programs. Pure N-body codes are available in both uniprocessor and parallel versions; SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models.
    - Snapshot analysis programs, which calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions.
    - Visualization programs, which generate interactive displays and produce still images and videos of particle distributions; the user may specify arbitrary color schemes and viewing transformations.

  19. nIFTY galaxy cluster simulations - III. The similarity and diversity of galaxies and subhaloes

    NASA Astrophysics Data System (ADS)

    Elahi, Pascal J.; Knebe, Alexander; Pearce, Frazer R.; Power, Chris; Yepes, Gustavo; Cui, Weiguang; Cunnama, Daniel; Kay, Scott T.; Sembolini, Federico; Beck, Alexander M.; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G.; Murante, Giuseppe; Perret, Valentin; Puchwein, Ewald; Saro, Alexandro; Teyssier, Romain

    2016-05-01

    We examine subhaloes and galaxies residing in a simulated Λ cold dark matter galaxy cluster (M_200^crit = 1.1 × 10^15 h^-1 M_⊙) produced by hydrodynamical codes ranging from classic smooth particle hydrodynamics (SPH) and newer SPH codes to adaptive and moving mesh codes. These codes use subgrid models to capture galaxy formation physics. We compare how well these codes reproduce the same subhaloes/galaxies in gravity-only, non-radiative hydrodynamics and full feedback physics runs by looking at the overall subhalo/galaxy distribution and on an individual object basis. We find that the subhalo population is reproduced to within ≲10 per cent for both dark matter only and non-radiative runs, with individual objects showing code-to-code scatter of ≲0.1 dex, although the gas in non-radiative simulations shows significant scatter. Including feedback physics significantly increases the diversity. Subhalo mass and V_max distributions vary by ≈20 per cent. The galaxy populations also show striking code-to-code variations. Although the Tully-Fisher relation is similar in almost all codes, the number of galaxies with 10^9 h^-1 M_⊙ ≲ M_* ≲ 10^12 h^-1 M_⊙ can differ by a factor of 4. Individual galaxies show code-to-code scatter of ~0.5 dex in stellar mass. Moreover, systematic differences exist, with some codes producing galaxies 70 per cent smaller than others. The diversity partially arises from the inclusion/absence of active galactic nucleus feedback. Our results, combined with our companion papers, demonstrate that subgrid physics is not just subject to fine-tuning: the complexity of building galaxies in all environments remains a challenge. We argue that even basic galaxy properties, such as stellar mass to halo mass, should be treated with error bars of ~0.2-0.4 dex.

  20. User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E)

    DTIC Science & Technology

    2014-06-01

    User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E), by James P. Larentzos … Army Research Laboratory, Aberdeen Proving Ground, MD 21005-5069. Report ARL-SR-290, June 2014. Dates covered: September 2013 - February 2014.

  1. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open source code accessible to the public through the Google Open Source Projects Hosting service.

  2. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
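    The core operation of any SPH code is kernel-weighted density summation over neighbours. A minimal serial sketch with the standard M4 cubic-spline kernel (illustrative only; neptune's OpenCL implementation is octree-accelerated rather than a direct double loop):

```python
import numpy as np

def w_cubic(r, h):
    """M4 cubic-spline SPH kernel in 3D, normalized so that its
    volume integral is 1; support radius 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(pos, mass, h):
    """Density at each particle by direct kernel summation:
    rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    n = len(mass)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(pos - pos[i], axis=1)
        rho[i] = np.sum(mass * w_cubic(r, h))
    return rho
```

    An isolated particle contributes only its self-term m W(0, h) = m/(π h³), a handy sanity check when porting the kernel to a GPU.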

  3. Towards robust algorithms for current deposition and dynamic load-balancing in a GPU particle in cell code

    NASA Astrophysics Data System (ADS)

    Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio

    2012-12-01

    We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge-preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary-sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show multi-GPU scalability results for the code and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
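    The particle-to-grid conflict described above can be avoided without atomics by sorting particles by cell and then combining contributions with a segmented reduction. A serial 1D NumPy analogue of that idea with cloud-in-cell (linear) weights — a sketch of the strategy, not jasmine's GPU kernel:

```python
import numpy as np

def deposit_charge(x, q, nx, dx):
    """Cloud-in-cell charge deposition onto a periodic 1D grid.
    Particles are sorted by cell index and contributions are merged
    with a segmented sum (np.bincount), so no two writes ever
    collide -- the serial analogue of the atomics-free GPU scheme."""
    cell = np.floor(x / dx).astype(int)
    frac = x / dx - cell                      # distance into the cell
    order = np.argsort(cell)                  # sort particles by cell
    cell, frac, q = cell[order], frac[order], q[order]
    rho = np.zeros(nx)
    # left and right CIC weights, reduced per target cell
    rho += np.bincount(cell, weights=q * (1.0 - frac), minlength=nx)[:nx]
    rho += np.bincount((cell + 1) % nx, weights=q * frac, minlength=nx)[:nx]
    return rho
```

    On a GPU the `bincount` step becomes a segmented reduction over the sorted particle array; the sort is what guarantees that each grid cell is written by exactly one reduction segment.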

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai

    We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes, including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles is specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.

  5. CPIC: a curvilinear Particle-In-Cell code for plasma-material interaction studies

    NASA Astrophysics Data System (ADS)

    Delzanno, G.; Camporeale, E.; Moulton, J. D.; Borovsky, J. E.; MacDonald, E.; Thomsen, M. F.

    2012-12-01

    We present a recently developed Particle-In-Cell (PIC) code in curvilinear geometry called CPIC (Curvilinear PIC) [1], where the standard PIC algorithm is coupled with a grid generation/adaptation strategy. Through the grid generator, which maps the physical domain to a logical domain where the grid is uniform and Cartesian, the code can simulate domains of arbitrary complexity, including the interaction of complex objects with a plasma. At present the code is electrostatic. Poisson's equation (in logical space) can be solved either with an iterative method, based on the Conjugate Gradient (CG) or the Generalized Minimal Residual (GMRES) method coupled with a multigrid solver used as a preconditioner, or directly with multigrid. The multigrid strategy is critical for the solver to perform optimally, or nearly optimally, as the dimension of the problem increases. CPIC also features a hybrid particle mover, where the computational particles are characterized by position in logical space and velocity in physical space. The advantage of a hybrid mover, as opposed to more conventional movers that advance particles directly in physical space, is that the interpolation of the particles in logical space is straightforward and computationally inexpensive, since one does not have to track the position of the particle. We will present our latest progress on the development of the code and document the code performance on standard plasma-physics tests. Then we will present the (preliminary) application of the code to a basic dynamic-charging problem, namely the charging and shielding of a spherical spacecraft in a magnetized plasma for various levels of magnetization, including the pulsed emission of an electron beam from the spacecraft. The dynamical evolution of the sheath and the time-dependent current collection will be described.
This study is in support of the ConnEx mission concept to use an electron beam from a magnetospheric spacecraft to trace magnetic field lines from the magnetosphere to the ionosphere [2]. [1] G.L. Delzanno, E. Camporeale, "CPIC: a new Particle-in-Cell code for plasma-material interaction studies", in preparation (2012). [2] J.E. Borovsky, D.J. McComas, M.F. Thomsen, J.L. Burch, J. Cravens, C.J. Pollock, T.E. Moore, and S.B. Mende, "Magnetosphere-Ionosphere Observatory (MIO): A multisatellite mission designed to solve the problem of what generates auroral arcs," Eos. Trans. Amer. Geophys. Union 79 (45), F744 (2000).
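    The hybrid-mover idea — position stored in logical space, velocity in physical space — can be sketched in 1D: for a mapping x = g(ξ), the logical velocity is the physical velocity divided by the metric g'(ξ). The quadratic mapping below is a hypothetical example, not CPIC's grid generator:

```python
# One step of a 1D hybrid mover: the particle carries its logical
# coordinate xi (uniform grid) and its physical velocity v.
def push_logical(xi, v, dt, g_prime):
    """Advance the logical coordinate: d(xi)/dt = v / g'(xi),
    where x = g(xi) maps logical to physical space."""
    return xi + dt * v / g_prime(xi)

# Hypothetical mapping that packs grid points near x = 0:
g = lambda xi: xi**2          # physical position
g_prime = lambda xi: 2.0 * xi  # metric of the mapping
```

    Because ξ lives on a uniform grid, locating the particle's cell for interpolation is a single floor operation, which is the cost advantage the abstract describes.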

  6. Particle In Cell Codes on Highly Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.

  7. Further Studies of the NRL Collective Particle Accelerator VIA Numerical Modeling with the MAGIC Code.

    DTIC Science & Technology

    1984-08-01

    Further Studies of the NRL Collective Particle Accelerator via Numerical Modeling with the MAGIC Code, by Robert J. Barker. August 1984. Final report for the period 1 April 1984 - 30 September 1984. Prepared for: Scientific … Performing organization report number: MRC/WDC-R

  8. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    NASA Astrophysics Data System (ADS)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  9. Uranus' cloud structure and scattering particle properties from IRTF SpeX observations

    NASA Astrophysics Data System (ADS)

    Tice, D. S.; Irwin, P. G. J.; Fletcher, L. N.; Teanby, N. A.; Orton, G. S.; Davis, G. R.

    2011-10-01

    Observations of Uranus were made in August 2009 with the SpeX spectrograph at the NASA Infrared Telescope Facility (IRTF). Analysed spectra range from 0.8 to 1.8 μm at a spatial resolution of 0.5" and a spectral resolution of R = 1,200. Spectra from 0.818 to 0.834 μm, a region characterised by both strong hydrogen quadrupole and methane absorptions, are considered to determine methane content. Evidence indicates that methane abundance varies with latitude. NEMESIS, an optimal estimation retrieval code with full-scattering capability, is employed to analyse the full range of data. Cloud and haze properties in the upper troposphere and stratosphere are characterised, and are consistent with other current literature. New information on single scattering albedos and particle size distributions is inferred.

  10. Scattering and Absorption Properties of Polydisperse Wavelength-sized Particles Covered with Much Smaller Grains

    NASA Technical Reports Server (NTRS)

    Dlugach, Jana M.; Mishchenko, Michael I.; Mackowski, Daniel W.

    2012-01-01

    Using the results of direct, numerically exact computer solutions of the Maxwell equations, we analyze scattering and absorption characteristics of polydisperse compound particles in the form of wavelength-sized spheres covered with a large number of much smaller spherical grains. The results pertain to the complex refractive indices 1.55 + i0.0003, 1.55 + i0.3, and 3 + i0.1. We show that the optical effects of dusting wavelength-sized hosts by microscopic grains can vary depending on the number and size of the grains as well as on the complex refractive index. Our computations also demonstrate the high efficiency of the new superposition T-matrix code developed for use on distributed memory computer clusters.

  11. A method for determining electrophoretic and electroosmotic mobilities using AC and DC electric field particle displacements.

    PubMed

    Oddy, M H; Santiago, J G

    2004-01-01

    We have developed a method for measuring the electrophoretic mobility of submicrometer, fluorescently labeled particles and the electroosmotic mobility of a microchannel. We derive explicit expressions for the unknown electrophoretic and the electroosmotic mobilities as a function of particle displacements resulting from alternating current (AC) and direct current (DC) applied electric fields. Images of particle displacements are captured using an epifluorescent microscope and a CCD camera. A custom image-processing code was developed to determine image streak lengths associated with AC measurements, and a custom particle tracking velocimetry (PTV) code was devised to determine DC particle displacements. Statistical analysis was applied to relate mobility estimates to measured particle displacement distributions.
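    The two-measurement idea can be reduced to a pair of algebraic relations, under simplifying assumptions that are ours rather than the paper's exact model: electroosmotic flow is taken to be fully suppressed during the high-frequency AC measurement, and the DC drift velocity is the sum of both mobilities times the field:

```python
def mobilities_from_displacements(streak_len, omega, e_ac,
                                  disp_dc, e_dc, t_dc):
    """Idealized recovery of mobilities from two measurements.

    AC: sinusoidal velocity mu_ep * E * sin(w t) gives a displacement
        amplitude mu_ep * E / w, hence a streak length 2 mu_ep E / w.
    DC: net displacement (mu_ep + mu_eo) * E * t.
    """
    mu_ep = streak_len * omega / (2.0 * e_ac)
    mu_eo = disp_dc / (e_dc * t_dc) - mu_ep
    return mu_ep, mu_eo
```

    The streak length plays the role of the AC displacement distribution's width; the real method fits whole displacement distributions rather than single values.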

  12. Version 4.0 of code Java for 3D simulation of the CCA model

    NASA Astrophysics Data System (ADS)

    Fan, Linyu; Liao, Jianwei; Zuo, Junsen; Zhang, Kebo; Li, Chao; Xiong, Hailing

    2018-07-01

    This paper presents a new version of the Java code for three-dimensional simulation of the Cluster-Cluster Aggregation (CCA) model, replacing the previous version. Many redundant traversals of the cluster list have been eliminated, so the simulation time is significantly reduced. In order to show the aggregation process in a more intuitive way, we have labeled different clusters with distinct colors. In addition, a new function outputs the particle coordinates of aggregates to a file, which makes it easier to couple our model with other models.
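    The CCA dynamics itself is simple to state: pick a cluster, random-walk it one lattice step, and merge it with any cluster it now touches. A minimal on-lattice toy version (in Python rather than the paper's Java, and far simpler than the published code):

```python
import random

def cca_step(clusters, size):
    """One step of a toy 3D cluster-cluster aggregation model.
    `clusters` is a list of sets of lattice sites on a periodic
    size^3 lattice.  Touching is tested with a 26-neighbourhood
    (wrap-around contacts are ignored for brevity)."""
    i = random.randrange(len(clusters))
    d = random.choice([(1,0,0), (-1,0,0), (0,1,0),
                       (0,-1,0), (0,0,1), (0,0,-1)])
    clusters[i] = {((x+d[0]) % size, (y+d[1]) % size, (z+d[2]) % size)
                   for x, y, z in clusters[i]}

    def touches(a, b):
        return any(abs(p[0]-q[0]) <= 1 and abs(p[1]-q[1]) <= 1
                   and abs(p[2]-q[2]) <= 1 for p in a for q in b)

    j = 0
    while j < len(clusters):
        if j != i and touches(clusters[i], clusters[j]):
            clusters[i] |= clusters[j]     # merge the touching cluster
            del clusters[j]
            if j < i:
                i -= 1
        else:
            j += 1
    return clusters
```

    The all-pairs `touches` scan is exactly the kind of redundant cluster-list traversal the new code version avoids; a production implementation would use spatial hashing instead.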

  13. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
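    The Lucy estimator at the heart of such codes rates the mean intensity in a cell by the total path length luminosity packets accumulate there. A deliberately minimal 1D pure-absorption sketch of that tally (a toy, not SPAMCART's particle-based implementation):

```python
import numpy as np

def lucy_path_tally(n_packets, kappa_rho, slab=1.0, nbins=10, seed=1):
    """Mean path length per packet accumulated in each bin of a 1D
    slab.  Packets enter at x=0 moving right; the distance to the
    absorption event is drawn from tau ~ Exp(1), i.e. s = tau/(kappa*rho)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, slab, nbins + 1)
    tally = np.zeros(nbins)
    for _ in range(n_packets):
        s = min(rng.exponential() / kappa_rho, slab)   # stop point (or exit)
        for k in range(nbins):
            # overlap of [0, s] with bin [edges[k], edges[k+1]]
            tally[k] += max(0.0, min(s, edges[k + 1]) - edges[k])
    return tally / n_packets
```

    For unit opacity the expected tally in a bin [a, b] is e^-a - e^-b, which makes the estimator easy to verify against the analytic attenuation law.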

  14. Modeling anomalous radial transport in kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2009-11-01

    Anomalous transport is typically the dominant component of radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.
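    The diffusion-plus-convection form of the anomalous flux, Γ = -D dn/dx + V n, is straightforward to evaluate on a radial profile. A NumPy sketch with the velocity dependence omitted (i.e. the velocity-independent limit mentioned for the neoclassical study):

```python
import numpy as np

def anomalous_flux(n, x, D, V):
    """Radial anomalous particle flux Gamma = -D dn/dx + V n,
    the diffusion-plus-convection form used to match a diagonal
    gradient-driven transport matrix.  D and V may be scalars or
    profiles on the same grid as n."""
    dndx = np.gradient(n, x)   # second-order finite difference
    return -D * dndx + V * n
```

    With V = 0 and an exponential profile n = e^-x the flux reduces to e^-x, a convenient analytic check of the discretization.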

  15. A study of sedimentation and aggregation of volcanic particles based on experiments carried out with a vertical wind tunnel

    NASA Astrophysics Data System (ADS)

    Bagheri, G.; Bonadonna, C.; Manzella, I.; Pontelandolfo, P.; Haas, P.

    2012-12-01

    A complete understanding and parameterization of both particle sedimentation and particle aggregation require systematic and detailed laboratory investigations performed under controlled conditions. For this purpose, a dedicated 4-meter-high vertical wind tunnel has been designed and constructed at the University of Geneva in collaboration with the Groupe de compétence en mécanique des fluides et procédés énergétiques (CMEFE). The final design is the result of Computational Fluid Dynamics simulations combined with laboratory tests. With its diverging test section, the tunnel is designed to suspend particles of different shapes and sizes in order to study the aerodynamic behavior of volcanic particles and their collision and aggregation. In the current set-up, velocities between 5.0 and 27 m s⁻¹ can be obtained, which correspond to typical volcanic particles with diameters between 10 and 40 mm. A combination of Particle Tracking Velocimetry (PTV) and statistical methods is used to derive particle terminal velocity. The method is validated using smooth spherical particles with known drag coefficient. More than 120 particles of different shapes (i.e. spherical, regular and volcanic) and compositions were 3D-scanned, and almost 1 million images of their suspension in the test section of the wind tunnel were recorded by a high-speed camera and analyzed by a PTV code specially developed for the wind tunnel. Measured values of terminal velocity for the tested particles are between 3.6 and 24.9 m s⁻¹, which corresponds to Reynolds numbers between 8×10³ and 1×10⁵. In addition to the vertical wind tunnel, an apparatus with height varying between 0.5 and 3.5 m has been built to measure the terminal velocity of micrometric particles at Reynolds numbers between 4 and 100. In these experiments, particles are released individually in the air at the top of the apparatus and their terminal velocities are measured at the bottom by a combination of high-speed camera imaging and PTV post-processing.
    The effects of shape, porosity and orientation of the particles on their terminal velocity are studied. Various shape factors are measured based on different methods, such as 3D-scanning, 2D-image processing, SEM image analysis, caliper measurements, pycnometer and buoyancy tests. Our preliminary experiments on non-smooth spherical particles and irregular particles reveal some interesting aspects. First, the effect of surface roughness and porosity is more important for spherical particles than for regular non-spherical and irregular particles. Second, the results underline that the aerodynamic behavior of individual irregular particles is better characterized by a range of drag-coefficient values than by a single value. Finally, since all the shape factors are calculated precisely for each individual particle, the resulting database can provide important information to benchmark and improve existing terminal-velocity models. Modifications of the wind tunnel (i.e. very low air speed, 0.03-5.0 m s⁻¹, for suspension of micrometric particles) and of the PTV code (i.e. multiple particle tracking and collision counting) have also been performed, in combination with the installation of a particle charging device, a controlled humidifier and a high-power chiller (to reach values down to -20 °C), in order to investigate both wet and dry aggregation of volcanic particles.

  16. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    NASA Astrophysics Data System (ADS)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. 
    Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions each method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The particle code is highly modular and segregated from the PDE solver, and can thus be easily transferred to other programs or adapted to various application cases.

  17. Use of Fluka to Create Dose Calculations

    NASA Technical Reports Server (NTRS)

    Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John

    2012-01-01

    Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to work with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged ions from Z=1 to Z=26, with energies from 0.1 to 10 GeV/nucleon, were simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
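    The lookup-table idea can be sketched with a hypothetical one-dimensional dose-versus-depth table; real tables from such a study would be indexed by ion species and energy as well as areal density:

```python
import numpy as np

# Hypothetical table: dose vs shield areal density (g/cm^2), as would
# be tabulated once from a Monte Carlo run (values are illustrative).
DEPTHS = np.array([0.0, 5.0, 10.0, 20.0, 50.0, 100.0])
DOSES  = np.array([1.00, 0.82, 0.70, 0.55, 0.33, 0.18])  # arbitrary units

def dose_lookup(depth):
    """Interpolate the dose at an arbitrary areal density instead of
    re-running the transport code; np.interp clamps to the table ends."""
    return float(np.interp(depth, DEPTHS, DOSES))
```

    A single interpolation replaces hours of transport simulation, which is the whole point of the parameterization.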

  18. Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.

    2010-12-01

    The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originated at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
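    In the test-particle limit, the diffusive shock acceleration mechanism assumed above yields a power-law momentum spectrum f(p) ∝ p^(-q) whose index depends only on the shock compression ratio r — a useful sanity check on any simulated shock-accelerated spectrum:

```python
def dsa_spectral_index(r):
    """Test-particle diffusive shock acceleration: the downstream
    phase-space distribution is f(p) ~ p^(-q) with q = 3r/(r - 1),
    where r is the shock compression ratio."""
    return 3.0 * r / (r - 1.0)
```

    A strong non-relativistic shock (r = 4) gives q = 4, i.e. the canonical E^-2 differential intensity for relativistic particles; weaker shocks give steeper spectra.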

  19. Relating quantum discord with the quantum dense coding capacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Qiu, Liang, E-mail: lqiu@cumt.edu.cn; Li, Song

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.
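    The dense-coding capacity entering relations like those above can be computed directly for simple two-qubit states from the standard formula χ = log₂ d_A + S(ρ_B) − S(ρ_AB), where S is the von Neumann entropy. A sketch for a Bell state (the formula is the standard one; the helper names are ours):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum(lam * log2(lam)) over nonzero eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def dense_coding_capacity(rho_ab, d_a=2):
    """chi = log2(d_A) + S(rho_B) - S(rho_AB) for a bipartite state."""
    d_b = rho_ab.shape[0] // d_a
    # partial trace over subsystem A
    rho_b = np.trace(rho_ab.reshape(d_a, d_b, d_a, d_b), axis1=0, axis2=2)
    return np.log2(d_a) + von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(phi, phi)
print(dense_coding_capacity(rho))  # 2.0: two classical bits per transmitted qubit
```

    Decoherence lowers S(ρ_B) − S(ρ_AB) and hence the capacity, which is the quantity the bound in the abstract constrains via quantum discord.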

  20. Code C# for chaos analysis of relativistic many-body systems with reactions

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.

    2012-04-01

    In this work we present a reaction module for “Chaos Many-Body Engine” (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object-oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), can be supplied as a parameter using a specific XML input file. Inspired by the Poincaré section, we also propose the “Clusterization Map” as a new, intuitive analysis method for many-body systems. As an example, we implemented a numerical toy-model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions. Catalogue identifier: AEGH_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 184 628 No. of bytes in distributed program, including test data, etc.: 7 905 425 Distribution format: tar.gz Programming language: Visual C#.NET 2005 Computer: PC Operating system: Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread. One processor used for each many-body system. RAM: 128 Megabytes Classification: 6.2, 6.5 Catalogue identifier of previous version: AEGH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464 External routines: Net Framework 2.0 Library Does the new version supersede the previous version?: Yes Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions. Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions. 
Object-oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Treatment of two-particle reactions and decays. For each particle, calculation of the time measured in the particle reference frame, according to the instantaneous velocity. Possibility to dynamically add particle properties (spin, isospin, etc.), and reactions/decays, using a specific XML input file. Basic support for Monte Carlo simulations. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, “clusterization map”, and energy conservation precision test. As an example of use, we implemented a toy-model for nuclear relativistic collisions at 4.5 A GeV/c. Reasons for new version: Following our goal of applying chaos theory to nuclear relativistic collisions at 4.5 A GeV/c, we developed a reaction module integrated with the Chaos Many-Body Engine. In the previous version, inheriting the Particle class was the only possibility of implementing more particle properties (spin, isospin, and so on). In the new version, particle properties can be dynamically added using a dictionary object. The application was improved in order to calculate the time measured in the rest frame of each particle. The new version treats two-particle reactions (a+b→c+d), decays (a→c+d), stimulated decays, and more complicated schemes implemented as various combinations of the previous reactions. Following our goal of creating a flexible application, the reactions list, including the corresponding properties (cross sections, particle lifetimes, etc.), can be supplied as a parameter using a specific XML configuration file. The simulation output files were modified for systems with reactions, while also assuring backward compatibility. We propose the “Clusterization Map” as a new investigation method for many-body systems. 
The multi-dimensional Lyapunov Exponent was adapted in order to be used for systems with variable structure. Basic support for Monte Carlo simulations was also added. Additional comments: Windows forms application for testing the engine. Easy copy/paste based deployment method. Running time: Quadratic complexity.
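    The XML-configured reaction list described above can be illustrated with a short parser. The schema here (tag and attribute names, units) is purely illustrative and is not the actual Chaos Many-Body Engine file format:

```python
import xml.etree.ElementTree as ET

# Hypothetical reaction-list schema, loosely modeled on the idea of a
# user-supplied XML configuration; names are illustrative only.
xml_text = """
<reactions>
  <reaction type="two-body" input="a b" output="c d" crossSection="0.4"/>
  <reaction type="decay"    input="a"   output="c d" lifetime="1.5e-23"/>
</reactions>
"""

def load_reactions(text):
    """Parse the reaction list into plain dictionaries."""
    root = ET.fromstring(text)
    out = []
    for r in root.findall("reaction"):
        out.append({
            "type": r.get("type"),
            "input": r.get("input").split(),
            "output": r.get("output").split(),
            "cross_section": float(r.get("crossSection", "0")),
            "lifetime": float(r.get("lifetime", "inf")),
        })
    return out

reactions = load_reactions(xml_text)
print(reactions[0]["type"])  # two-body
```

    Keeping the reaction catalogue in data rather than code is what lets new reaction schemes be added without recompiling the engine.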

  1. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    DOE PAGES

    Vincenti, H.; Lobet, M.; Lehe, R.; ...

    2016-09-19

    In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of order 1–3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include the AVX-512 instruction set with 512-bit register lengths (8 doubles/16 singles). 
Program summary Program Title: vec_deposition Program Files doi: http://dx.doi.org/10.17632/nh77fv9k8c.1 Licensing provisions: BSD 3-Clause Programming language: Fortran 90 External routines/libraries: OpenMP > 4.0 Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there is no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm for the current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you have to satisfy the following two requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduce memory accesses that can hinder vector performance. The routines can be used directly on each particle tile. (2) You should compile your code with a Fortran 90 compiler (e.g., Intel, GNU, or Cray) and provide proper alignment flags and compiler alignment directives (more details in the README file).
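    The gather/scatter hazard that makes deposition hard to vectorize is easy to demonstrate in a scalar setting. The sketch below (plain NumPy, not the paper's Fortran algorithm) shows 1D cloud-in-cell charge deposition: a naive fancy-indexed `rho[i] +=` would silently drop contributions when several particles target the same cell, whereas `np.add.at` performs the scatter-add correctly, which is precisely the memory conflict the vectorized algorithm must manage.

```python
import numpy as np

def deposit_cic(x, q, nx, dx):
    """Cloud-in-cell charge deposition on a periodic 1D grid."""
    rho = np.zeros(nx)
    xi = x / dx
    i0 = np.floor(xi).astype(int)
    w1 = xi - i0  # weight assigned to the right-hand node
    # np.add.at accumulates correctly even when indices repeat,
    # unlike rho[i0] += ..., which would lose colliding updates.
    np.add.at(rho, i0 % nx, q * (1.0 - w1))
    np.add.at(rho, (i0 + 1) % nx, q * w1)
    return rho

x = np.array([0.5, 0.5, 2.25])  # two particles deliberately share a cell
rho = deposit_cic(x, np.ones(3), nx=4, dx=1.0)
print(rho.sum())  # 3.0: total charge is conserved
```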

  3. The distribution of dark matter, galaxies, and the intergalactic medium in a cold dark matter dominated universe

    NASA Technical Reports Server (NTRS)

    Ryu, Dongsu; Vishniac, Ethan T.; Chiang, Wei-Hwan

    1988-01-01

    The evolution and distribution of galaxies and the intergalactic medium (IGM) have been studied, along with collisionless dark matter, in a universe dominated by cold dark matter. The Einstein-de Sitter universe with Ω_0 = 1 and h = 0.5 was considered (here h = H_0/(100 km/s/Mpc) and H_0 is the present value of the Hubble constant). It is assumed that initially dark matter composes 90% and baryonic matter composes 10% of the total mass, and that the primordial baryonic matter is comprised of H and He, with the abundance of He equal to 10% of H by number. Galaxies are allowed to form out of the IGM if the total density and baryonic density satisfy an overdensity criterion. Subsequently, the newly formed galaxies release 10^60 ergs of energy into the IGM over a period of 10^8 years. Calculations have been performed with 32^3 dark matter particles and 32^3 cells in a cube with comoving side length L = 9.6/h Mpc. Dark matter particles and galaxies have been followed with an N-body code, while the IGM has been followed with a fluid code.
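    The stated parameters fix the simulation's mass resolution, and the arithmetic is a useful consistency check. The sketch below uses the standard critical density ρ_crit = 2.775×10^11 h² M_sun/Mpc³; the per-particle mass it prints is our derived estimate, not a number quoted in the abstract.

```python
# Back-of-envelope mass resolution implied by the stated setup:
# Omega_0 = 1, h = 0.5, box side L = 9.6/h Mpc, 32^3 particles, 90% dark matter.
h = 0.5
rho_crit = 2.775e11 * h**2        # critical density in M_sun / Mpc^3
L = 9.6 / h                        # comoving box side in Mpc
m_total = rho_crit * L**3          # Omega_0 = 1: mean density equals critical
m_dm_particle = 0.9 * m_total / 32**3
print(f"{m_dm_particle:.2e} M_sun per dark matter particle")  # ~1.3e10 M_sun
```

    A per-particle mass of order 10^10 M_sun means individual galaxy halos are only marginally resolved, which is consistent with galaxies being inserted by an overdensity criterion rather than formed self-consistently.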

  5. A comprehensive study of MPI parallelism in three-dimensional discrete element method (DEM) simulation of complex-shaped granular particles

    NASA Astrophysics Data System (ADS)

    Yan, Beichuan; Regueiro, Richard A.

    2018-02-01

    A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and theoretical functions for the scalability and memory usage of 3D DEM are derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes for simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided and demonstrate high speedup and excellent scalability. It is also discovered that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
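    The link-block decomposition with ghost/border layers implies a fixed neighbor topology: each block exchanges boundary particles with up to 26 adjacent blocks. A dependency-free sketch of that bookkeeping (the rank-to-block mapping and function names are our illustration, not the paper's code):

```python
import itertools

def block_coords(rank, nx, ny, nz):
    """Map an MPI-style rank to (i, j, k) block coordinates, x varying fastest."""
    return (rank % nx, (rank // nx) % ny, rank // (nx * ny))

def neighbor_ranks(rank, nx, ny, nz):
    """Ranks of the up-to-26 adjacent blocks that would exchange
    ghost/border particle layers with this block (non-periodic domain)."""
    i, j, k = block_coords(rank, nx, ny, nz)
    out = []
    for di, dj, dk in itertools.product((-1, 0, 1), repeat=3):
        if (di, dj, dk) == (0, 0, 0):
            continue
        ni, nj, nk = i + di, j + dj, k + dk
        if 0 <= ni < nx and 0 <= nj < ny and 0 <= nk < nz:
            out.append(ni + nx * (nj + ny * nk))
    return out

# An interior block of a 4x4x4 decomposition touches all 26 neighbors;
# a corner block touches only 7.
print(len(neighbor_ranks(21, 4, 4, 4)), len(neighbor_ranks(0, 4, 4, 4)))
```

    In an actual MPI run, each rank would post sends/receives only to this neighbor list, which is what keeps communication local as the node count grows.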

  6. Colour-barcoded magnetic microparticles for multiplexed bioassays.

    PubMed

    Lee, Howon; Kim, Junhoi; Kim, Hyoki; Kim, Jiyun; Kwon, Sunghoon

    2010-09-01

    Encoded particles have a demonstrated value for multiplexed high-throughput bioassays such as drug discovery and clinical diagnostics. In diverse samples, the ability to use a large number of distinct identification codes on assay particles is important to increase throughput. Proper handling schemes are also needed to readout these codes on free-floating probe microparticles. Here we create vivid, free-floating structural coloured particles with multi-axis rotational control using a colour-tunable magnetic material and a new printing method. Our colour-barcoded magnetic microparticles offer a coding capacity easily into the billions with distinct magnetic handling capabilities including active positioning for code readouts and active stirring for improved reaction kinetics in microscale environments. A DNA hybridization assay is done using the colour-barcoded magnetic microparticles to demonstrate multiplexing capabilities.
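    The claim of "a coding capacity easily into the billions" is simple combinatorics: with L independently set bars per particle and M distinguishable colours per bar, the code space is M^L. The specific numbers below are assumptions for illustration, not the paper's design parameters.

```python
# Illustrative coding-capacity arithmetic (assumed values, not the
# paper's exact barcode geometry):
M = 10  # distinguishable structural colours per bar (assumed)
L = 10  # bars per particle (assumed)
print(M ** L)  # 10_000_000_000 distinct codes: "into the billions"
```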

  7. Coding considerations for standalone molecular dynamics simulations of atomistic structures

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-10-01

    The laws of Newtonian mechanics allow ab initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab initio programs for simulation on a standalone computer and illustrates the approach with C language codes in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousands of particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing one's own codes has the obvious advantage of teaching techniques applicable to new problems.
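    The "velocity-time integration" at the heart of such a standalone MD code is typically a symplectic scheme like velocity Verlet. The sketch below shows the generic algorithm in Python rather than the paper's C implementation, exercised on a harmonic oscillator where energy conservation is easy to check:

```python
import numpy as np

def velocity_verlet(x, v, accel, dt, steps):
    """Standard velocity-Verlet integrator; accel(x) returns acceleration.
    Generic scheme for illustration, not the paper's C code."""
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = accel(x)                     # forces at new positions
        v = v + 0.5 * (a + a_new) * dt       # velocity update (averaged accel)
        a = a_new
    return x, v

# Harmonic oscillator test: total energy should stay near the initial 0.5.
accel = lambda x: -x
x, v = velocity_verlet(np.array([1.0]), np.array([0.0]), accel, dt=0.01, steps=10000)
print(0.5 * v**2 + 0.5 * x**2)
```

    In a real MD code the `accel` callback would evaluate the embedded-atom or pair potential over all particles; the integrator itself is unchanged, which is why it belongs in a reusable header library.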

  8. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

    We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N^2), while full hierarchy searches can be as expensive as O(N^3), making analysis highly inefficient for multiple data sets with N≳10. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU General Public License.
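    The physical test at the core of any bound-pair search is the two-body energy criterion E = v_rel²/2 − G(m₁+m₂)/r < 0. The sketch below applies it with a deliberately simple brute-force O(N²) candidate loop; companion's contribution is doing the candidate search in O(N log N) (e.g., with a tree), which this toy does not reproduce.

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def bound_pairs(pos, vel, mass, r_search):
    """Return (i, j) pairs that are mutually bound and closer than r_search.

    Brute-force sketch of the boundness test only:
    E = |v_rel|**2 / 2 - G (m_i + m_j) / r < 0.
    """
    n = len(mass)
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            if r == 0.0 or r > r_search:
                continue
            v2 = np.sum((vel[i] - vel[j]) ** 2)
            if 0.5 * v2 - G * (mass[i] + mass[j]) / r < 0.0:
                pairs.append((i, j))
    return pairs

pos = np.array([[0.0, 0, 0], [1.0, 0, 0], [10.0, 0, 0]])
vel = np.array([[0.0, 0, 0], [0.0, 1.0, 0], [0.0, 0, 0]])
mass = np.ones(3)
print(bound_pairs(pos, vel, mass, r_search=2.0))  # [(0, 1)]
```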

  9. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; et al.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis on the hadronic and nuclear sector.

  10. Assessment and Requirements of Nuclear Reaction Databases for GCR Transport in the Atmosphere and Structures

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.

    1998-01-01

    The transport properties of galactic cosmic rays (GCR) in the atmosphere, material structures, and the human body (self-shielding) are of interest in risk assessment for supersonic and subsonic aircraft and for space travel in low-Earth orbit and on interplanetary missions. Nuclear reactions, such as knockout and fragmentation, produce large modifications of the particle types and energies of the galactic cosmic rays penetrating materials. We make an assessment of the current nuclear reaction models and improvements in these models for developing the required transport code databases. A new fragmentation database (QMSFRG) based on microscopic models is compared to the NUCFRG2 model, and implications for shield assessment are made using the HZETRN radiation transport code. For deep penetration problems, the build-up of light particles, such as nucleons, light clusters and mesons from nuclear reactions, in conjunction with the absorption of the heavy ions, leads to the dominance of the charge Z = 0, 1, and 2 hadrons in the exposures at large penetration depths. Light particles are produced through nuclear or cluster knockout and in evaporation events with characteristically distinct spectra which play unique roles in the build-up of secondary radiations in shielding. We describe models of light particle production in nucleon and heavy-ion induced reactions and make an assessment of the importance of light particle multiplicity and spectral parameters in these exposures.

  11. Rates for neutron-capture reactions on tungsten isotopes in iron meteorites. [Abstract only

    NASA Technical Reports Server (NTRS)

    Masarik, J.; Reedy, R. C.

    1994-01-01

    High-precision W isotopic analyses by Harper and Jacobsen indicate the W-182/W-183 ratio in the Toluca iron meteorite is shifted by −(3.0 ± 0.9) × 10^−4 relative to a terrestrial standard. Possible causes of this shift are neutron-capture reactions on W during Toluca's approximately 600-Ma exposure to cosmic-ray particles or radiogenic growth of W-182 from 9-Ma Hf-182 in the silicate portion of the Earth after removal of W to the Earth's core. Calculations of the rates of neutron-capture reactions on W isotopes were done to study the first possibility. The LAHET Code System (LCS), which consists of the Los Alamos High Energy Transport (LAHET) code and the Monte Carlo N-Particle (MCNP) transport code, was used to numerically simulate the irradiation of the Toluca iron meteorite by galactic-cosmic-ray (GCR) particles and to calculate the rates of W(n, gamma) reactions. Toluca was modeled as a 3.9-m-radius sphere with the composition of a typical IA iron meteorite. The incident GCR protons and their interactions were modeled with LAHET, which also handled the interactions of neutrons with energies above 20 MeV. The rates for the capture of neutrons by W-182, W-183, and W-186 were calculated using the detailed library of (n, gamma) cross sections in MCNP. For this study of the possible effect of W(n, gamma) reactions on W isotope systematics, we consider the peak rates. The calculated maximum change in the normalized W-182/W-183 ratio due to neutron-capture reactions cannot account for more than 25% of the mass-182 deficit observed in Toluca W.

  12. LIGKA: A linear gyrokinetic code for the description of background kinetic and fast particle effects on the MHD stability in tokamaks

    NASA Astrophysics Data System (ADS)

    Lauber, Ph.; Günter, S.; Könies, A.; Pinches, S. D.

    2007-09-01

    In a plasma with a population of super-thermal particles generated by heating or fusion processes, kinetic effects can lead to the additional destabilisation of MHD modes or even to additional energetic particle modes. In order to describe these modes, a new linear gyrokinetic MHD code has been developed and tested, LIGKA (linear gyrokinetic shear Alfvén physics) [Ph. Lauber, Linear gyrokinetic description of fast particle effects on the MHD stability in tokamaks, Ph.D. Thesis, TU München, 2003; Ph. Lauber, S. Günter, S.D. Pinches, Phys. Plasmas 12 (2005) 122501], based on a gyrokinetic model [H. Qin, Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks, Ph.D. Thesis, Princeton University, 1998]. A finite Larmor radius expansion together with the construction of some fluid moments and specification to the shear Alfvén regime results in a self-consistent, electromagnetic, non-perturbative model, that allows not only for growing or damped eigenvalues but also for a change in mode-structure of the magnetic perturbation due to the energetic particles and background kinetic effects. Compared to previous implementations [H. Qin, mentioned above], this model is coded in a more general and comprehensive way. LIGKA uses a Fourier decomposition in the poloidal coordinate and a finite element discretisation in the radial direction. Both analytical and numerical equilibria can be treated. Integration over the unperturbed particle orbits is performed with the drift-kinetic HAGIS code [S.D. Pinches, Ph.D. Thesis, The University of Nottingham, 1996; S.D. Pinches et al., CPC 111 (1998) 131] which accurately describes the particles' trajectories. This allows finite-banana-width effects to be implemented in a rigorous way since the linear formulation of the model allows the exchange of the unperturbed orbit integration and the discretisation of the perturbed potentials in the radial direction. 
Successful benchmarks for toroidal Alfvén eigenmodes (TAEs) and kinetic Alfvén waves (KAWs) with analytical results, ideal MHD codes, drift-kinetic codes and other codes based on kinetic models are reported.
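    The toroidal Alfvén eigenmodes benchmarked above live in the gap that opens where two cylindrical shear-Alfvén continuum branches cross. A minimal numerical sketch of that textbook picture (all profiles below are illustrative assumptions, not LIGKA's equilibria): the continuum is ω(r) = |k_par| v_A with k_par = (n − m/q(r))/R0, and the m and m+1 branches cross where q = (m + 1/2)/n.

```python
import numpy as np

# Cylindrical shear-Alfven continuum sketch; profile choices are illustrative.
R0, vA, n = 3.0, 1.0, 2          # major radius, Alfven speed, toroidal mode number
r = np.linspace(0.01, 1.0, 200)  # normalized minor radius
q = 1.0 + 1.5 * r**2             # assumed safety-factor profile

def continuum(m):
    """Continuum frequency omega(r) = |(n - m/q)/R0| * vA for poloidal m."""
    return np.abs((n - m / q) / R0) * vA

# The m and m+1 branches cross where q = (m + 1/2)/n; toroidicity then
# couples them and opens the TAE frequency gap there.
m = 3
gap_q = (m + 0.5) / n
print(gap_q)  # 1.75, inside the assumed q range, so the crossing exists here
```

    At the radius where q passes through 1.75, the two branches are degenerate; a gap-mode solver like LIGKA resolves the coupled eigenmode that sits inside the resulting gap.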

  13. Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2000-01-01

    An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large-strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three-dimensional computer code. Simulations of three-dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single- and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.

  14. Activation of accelerator construction materials by heavy ions

    NASA Astrophysics Data System (ADS)

    Katrík, P.; Mustafin, E.; Hoffmann, D. H. H.; Pavlovič, M.; Strašík, I.

    2015-12-01

    Activation data for an aluminum target irradiated by a 200 MeV/u 238U ion beam are presented in the paper. The target was irradiated in the stacked-foil geometry and analyzed using gamma-ray spectroscopy. The purpose of the experiment was to study the role of primary particles, projectile fragments, and target fragments in the activation process using depth profiling of the residual activity. The study showed which particles contribute dominantly to the target activation. The experimental data were compared with Monte Carlo simulations by the FLUKA 2011.2c.0 code. This study is part of a research program devoted to the activation of accelerator construction materials by high-energy (⩾200 MeV/u) heavy ions at GSI Darmstadt. The experimental data are needed to validate the computer codes used for simulating the interaction of swift heavy ions with matter.

  15. Integrating Geochemical Reactions with a Particle-Tracking Approach to Simulate Nitrogen Transport and Transformation in Aquifers

    NASA Astrophysics Data System (ADS)

    Cui, Z.; Welty, C.; Maxwell, R. M.

    2011-12-01

    Lagrangian, particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator split into two steps: (1) the physical movement of the particles including the attachment/detachment to solid surfaces, which is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions including biodegradation are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1, the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain with variably saturated conditions. Potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. 
Implementation details and verification of the enhanced code against one-dimensional analytical solutions and other existing numerical models will be presented, in addition to a discussion of implementation challenges.
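    The two operator-split steps described above can be sketched compactly: a Lagrangian random-walk move for each particle, then a grid-level reaction step using Monod kinetics. Everything below is an illustrative toy (parameter values, function names, explicit-Euler reaction update), not the SLIM-FAST implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk(x, v, D, dt):
    """Step 1: advection plus dispersion as a Lagrangian random walk."""
    return x + v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.shape)

def monod(c, dt, mu_max=1.0, K=0.5):
    """Step 2: one explicit-Euler Monod degradation step for substrate c."""
    return c - dt * mu_max * c / (K + c)

# One split step for 1000 particles and one grid cell's concentration.
x = random_walk(np.zeros(1000), v=1.0, D=0.1, dt=0.1)
c = monod(np.array([2.0]), dt=0.01)
print(x.mean(), c)
```

    In the full hybrid scheme, particle masses would be binned to grid concentrations before the reaction step and mapped back afterward; the toy omits that mapping to keep the two operators visible.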

  16. A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromley, Benjamin C.; Kenyon, Scott J., E-mail: bromley@physics.utah.edu, E-mail: skenyon@cfa.harvard.edu

    2011-04-20

    We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M_sun disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M_⊕.

  17. Development of 1D Particle-in-Cell Code and Simulation of Plasma-Wall Interactions

    NASA Astrophysics Data System (ADS)

    Rose, Laura P.

    This thesis discusses the development of a 1D particle-in-cell (PIC) code and the analysis of plasma-wall interactions. The 1D code (Plasma and Wall Simulation, PAWS) is a kinetic simulation of plasma that treats both electrons and ions as particles. The goal of this thesis is to study near-wall plasma interaction to better understand the mechanisms that occur in this region, with a main focus on the effect of secondary electrons on the sheath profile. The 1D code is modeled using the PIC method: both electrons and ions are treated as macroparticles, and the field is solved on each node and weighted back to each macroparticle. A pre-ionized plasma was loaded into the domain, with particle velocities sampled from a Maxwellian distribution. An important part of this code is the boundary condition at the wall: when a particle hits the wall, a secondary electron may be produced based on the incident energy. To study the sheath profile, the simulations were run for several cases. Varying background neutral gas densities were run with the 2D code and compared to experimental values; different wall materials were simulated to show their effect on secondary electron emission (SEE); and different SEE yields were run, including one study with very high yields to demonstrate a space-charge-limited sheath. Wall roughness was also studied with the 1D code using random angles of incidence. In addition to the 1D code, an external 2D code was used to investigate wall roughness without secondary electrons. The roughness profiles were created following investigations of wall roughness inside Hall thrusters, based on studies of lifetime erosion of the inner and outer walls of these devices. The 2D code, Starfish [33], is a general 2D axisymmetric/Cartesian code for modeling a wide range of plasma and rarefied gas problems.
These results show that a higher SEE yield produces a smaller sheath and that wall roughness lowers the SEE yield. Modeling near-wall interactions is not a simple or perfected task: lacking a second dimension and a sputtering model, this study cannot show the positive effects wall roughness could have on Hall thruster performance, since roughness arises from the negative effects of sputtering.
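
    The wall boundary condition described above, in which an incident particle may produce secondary electrons based on its energy, can be sketched as follows. The linear yield model and the threshold energy `E1` are hypothetical placeholders, not the thesis' material-specific SEE model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear SEE yield model gamma(E) = E / E1 (capped); the thesis
# uses material-specific yield curves, so E1 here is only a placeholder.
E1 = 50.0   # eV of incident energy per emitted secondary, on average (assumed)

def wall_hit(E_inc):
    """Number of secondary electrons emitted for an incident energy E_inc (eV)."""
    gamma = min(E_inc / E1, 3.0)     # expected yield for this impact
    n = int(gamma)                   # guaranteed secondaries
    if rng.random() < gamma - n:     # fractional yield emitted probabilistically
        n += 1
    return n

# averaged over many wall hits, the emitted count recovers the yield curve
hits = [wall_hit(75.0) for _ in range(10000)]
```

    Sampling the fractional part stochastically keeps the per-impact count an integer while the ensemble average matches the continuous yield.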

  18. EMPHASIS/Nevada UTDEM user guide. Version 2.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, C. David; Seidel, David Bruce; Pasik, Michael Francis

    The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. UTDEM is a general-purpose code for solving Maxwell's equations on arbitrary, unstructured tetrahedral meshes. The geometries and the meshes thereof are limited only by the patience of the user in meshing and by the available computing resources for the solution. UTDEM solves Maxwell's equations using finite-element method (FEM) techniques on tetrahedral elements using vector, edge-conforming basis functions. EMPHASIS/Nevada Unstructured Time-Domain ElectroMagnetic Particle-In-Cell (UTDEM PIC) is a superset of the capabilities found in UTDEM. It adds the capability to simulate systems in which the effects of free charge are important and need to be treated in a self-consistent manner. This is done by integrating the equations of motion for macroparticles (a macroparticle is an object that represents a large number of real physical particles, all with the same position and momentum) being accelerated by the electromagnetic forces upon the particle (Lorentz force). The motion of these particles results in a current, which is a source for the fields in Maxwell's equations.
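
    Integrating the macroparticle equations of motion under the Lorentz force is commonly done with the Boris scheme; the sketch below shows that textbook algorithm (the actual UTDEM PIC integrator may differ):

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick, drift.
    Textbook sketch of a Lorentz-force macroparticle update; the actual UTDEM
    PIC integrator may differ."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                    # first half electric kick
    t = qmdt2 * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)   # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + qmdt2 * E                 # second half electric kick
    return x + v_new * dt, v_new

# demo: a pure magnetic field rotates v without changing its magnitude
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x, v = boris_push(x, v, np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 1.0, 0.1)
```

    The rotation step is exactly norm-preserving, which is why Boris-type pushers keep gyromotion stable over long runs.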

  19. Development of a generalized multi-pixel and multi-parameter satellite remote sensing algorithm for aerosol properties

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Nakajima, T.; Takenaka, H.; Higurashi, A.

    2013-12-01

    We develop a new satellite remote sensing algorithm to retrieve the properties of aerosol particles in the atmosphere. In recent years, high-resolution, multi-wavelength, multiple-angle observation data have been obtained by ground-based spectral radiometers and imaging sensors on board satellites. With this development, optimized multi-parameter remote sensing methods based on Bayesian theory have come into common use (Turchin and Nozik, 1969; Rodgers, 2000; Dubovik et al., 2000). Additionally, direct use of radiative transfer calculations, in place of look-up-table methods, has been adopted for non-linear remote sensing problems, supported by the progress of computing technology (Dubovik et al., 2011; Yoshida et al., 2011). We are developing a flexible multi-pixel, multi-parameter remote sensing algorithm for aerosol optical properties. In this algorithm, the inversion method combines the MAP method (maximum a posteriori method; Rodgers, 2000) with the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, we include a radiative transfer code, Rstar (Nakajima and Tanaka, 1986, 1988), solved numerically at each iteration of the solution search. The Rstar code has been directly used in the AERONET operational processing system (Dubovik and King, 2000). The retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine-mode, sea salt, and dust particles, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all the sub-domains in the target area.
We conducted numerical retrieval tests of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. In this test, we simulated satellite-observed radiances for a sub-domain of 5 by 5 pixels with the Rstar code, assuming wavelengths of 380, 674, 870, and 1600 nm, the US standard atmosphere, and several aerosol and ground surface conditions. The experiment showed that the AOTs of fine-mode and dust particles, the soot fraction, and the ground surface albedo at 674 nm are retrieved within absolute differences of 0.04, 0.01, 0.06, and 0.006 from the true values, respectively, for the dark-surface case, and within 0.06, 0.03, 0.04, and 0.10, respectively, for the bright-surface case. We will conduct further tests to study the information content of the parameters needed for aerosol and land surface remote sensing with different boundary conditions among sub-domains.
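
    For a linearized, Gaussian version of such an inversion, combining the MAP cost function with a Phillips-Twomey second-difference smoothing term has the closed-form normal-equations solution sketched below. The forward model `K`, the covariances, and the smoothing weight are synthetic stand-ins, not the Rstar-based operator of the actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 20, 30                        # state and measurement sizes (illustrative)
K = rng.normal(size=(m, n))          # hypothetical linearized forward model
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = K @ x_true + 0.01 * rng.standard_normal(m)

Se_inv = np.eye(m) / 0.01**2         # inverse measurement-noise covariance
xa = np.zeros(n)                     # a priori state (MAP term)
Sa_inv = np.eye(n) / 1.0**2          # inverse a priori covariance
D = np.diff(np.eye(n), 2, axis=0)    # second-difference operator (Phillips-Twomey)
gamma = 10.0                         # smoothing strength

# minimize (y-Kx)' Se^-1 (y-Kx) + (x-xa)' Sa^-1 (x-xa) + gamma ||D x||^2
A = K.T @ Se_inv @ K + Sa_inv + gamma * D.T @ D
b = K.T @ Se_inv @ y + Sa_inv @ xa
x_hat = np.linalg.solve(A, b)
```

    In the real algorithm this linear solve is one inner step of an iterative search, with the Jacobian recomputed from the radiative transfer code at each iteration.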

  20. Creation of fully vectorized FORTRAN code for integrating the movement of dust grains in interplanetary environments

    NASA Technical Reports Server (NTRS)

    Colquitt, Walter

    1989-01-01

    The main objective is to improve the performance of a specific FORTRAN computer code from the Planetary Sciences Division of NASA/Johnson Space Center when used on a modern vectorizing supercomputer. The code is used to calculate orbits of dust grains that separate from comets and asteroids. This code accounts for influences of the sun and 8 planets (neglecting Pluto), solar wind, and solar light pressure including Poynting-Robertson drag. Calculations allow one to study the motion of these particles as they are influenced by the Earth or one of the other planets. Some of these particles become trapped just beyond the Earth for long periods of time. These integer period resonances vary from 3 orbits of the Earth and 2 orbits of the particles to as high as 14 to 13.

  1. Gravitational tree-code on graphics processing units: implementation in CUDA

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon

    2010-05-01

    We present a new very fast tree-code which runs on massively parallel Graphics Processing Units (GPUs) with the NVIDIA CUDA architecture. The tree construction and calculation of multipole moments are carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way we achieve a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. The code has a convenient user interface and is freely available at http://castle.strw.leidenuniv.nl/software/octgrav.html
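
    The tree walk with an opening-angle criterion can be illustrated with a minimal 2-D Barnes-Hut-style sketch: a node is accepted as a monopole when its size s over its distance d satisfies s/d < θ, and opened otherwise. This is a didactic serial CPU version, far simpler than the GPU implementation described above:

```python
import numpy as np

# Minimal 2-D Barnes-Hut-style tree (didactic sketch; the GPU tree-code above is
# far more elaborate). A node is accepted as a monopole when size/distance < theta.

class Node:
    def __init__(self, pts, masses, lo, hi):
        self.size = float(np.max(hi - lo))
        self.mass = masses.sum()
        self.com = (pts * masses[:, None]).sum(axis=0) / self.mass
        self.children = []
        if len(pts) > 1:                      # subdivide into four quadrants
            mid = (lo + hi) / 2
            for ix in (False, True):
                for iy in (False, True):
                    sel = ((pts[:, 0] > mid[0]) == ix) & ((pts[:, 1] > mid[1]) == iy)
                    if sel.any():
                        clo = np.where([ix, iy], mid, lo)
                        chi = np.where([ix, iy], hi, mid)
                        self.children.append(Node(pts[sel], masses[sel], clo, chi))

def accel(node, p, theta=0.5, eps=1e-4):
    """Gravitational acceleration at p (G = 1, softened by eps)."""
    r = node.com - p
    d = np.linalg.norm(r) + eps
    if not node.children or node.size / d < theta:
        return node.mass * r / d**3           # monopole approximation
    return sum(accel(c, p, theta, eps) for c in node.children)

rng = np.random.default_rng(4)
pts = rng.random((50, 2))
masses = rng.uniform(0.5, 1.5, 50)
root = Node(pts, masses, np.zeros(2), np.ones(2))
a = accel(root, np.array([3.0, 3.0]), theta=0.5)
```

    With θ → 0 the walk degenerates to direct summation; θ ≈ 0.5 trades a percent-level force error for a large reduction in evaluated interactions.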

  2. Simulation of halo particles with Simpsons

    NASA Astrophysics Data System (ADS)

    Machida, Shinji

    2003-12-01

    Recent code improvements and simulation results for halo particles with Simpsons will be presented. We tried to identify resonance behavior of halo particles by examining the tune evolution of individual macroparticles.

  3. Charged particle tracking through electrostatic wire meshes using the finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devlin, L. J.; Karamyshev, O.; Welsch, C. P., E-mail: carsten.welsch@cockcroft.ac.uk

    Wire meshes are used across many disciplines to accelerate and focus charged particles; however, analytical solutions are approximate, and few codes exist which simulate the exact fields around a mesh of physical size. A tracking code based in Matlab-Simulink, using field maps generated with finite element software, has been developed which tracks electrons or ions through electrostatic wire meshes. The fields around such a geometry can be written as an analytical expression under several basic assumptions, but computational calculations are required to obtain realistic values of the electric potential and fields, particularly when multiple wire meshes are deployed. The tracking code is flexible in that any quantitatively describable particle distribution can be used, for both electrons and ions, and it offers other benefits such as easy export to other programs for analysis. The code is made freely available, and physical examples are highlighted where it could benefit different applications.

  4. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to issues such as the small-scale intermittency of inertial-range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.

  5. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has the primary mission of developing the capabilities of ARGUS for accelerator modeling and releasing it to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  6. Calculation of spherical harmonics and Wigner d functions by FFT. Applications to fast rotational matching in molecular replacement and implementation into AMoRe.

    PubMed

    Trapani, Stefano; Navaza, Jorge

    2006-07-01

    The FFT calculation of spherical harmonics, Wigner D matrices and rotation function has been extended to all angular variables in the AMoRe molecular replacement software. The resulting code avoids singularity issues arising from recursive formulas, performs faster and produces results with at least the same accuracy as the original code. The new code aims at permitting accurate and more rapid computations at high angular resolution of the rotation function of large particles. Test calculations on the icosahedral IBDV VP2 subviral particle showed that the new code performs on the average 1.5 times faster than the original code.

  7. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    NASA Astrophysics Data System (ADS)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. There are, generally speaking, two possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (an instance of the structure) represents one particle, and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark, we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques, such as template metaprogramming, to automatically generate code for user-defined heterogeneous data structures.
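
    The AoS-versus-SoA distinction can be illustrated with a NumPy analogue of the benchmark (SoAx itself is C++; the names below are illustrative): per-property contiguous arrays allow one vectorized sweep, while the per-object layout forces an element-wise interpreted loop.

```python
import time
import numpy as np

n = 200_000

# AoS: one object (here a dict) per particle
aos = [{"id": i, "x": 0.0, "v": 1.0} for i in range(n)]

# SoA: one contiguous array per property
soa = {"id": np.arange(n), "x": np.zeros(n), "v": np.ones(n)}

t0 = time.perf_counter()
for p in aos:                       # element-wise loop over scattered objects
    p["x"] += p["v"] * 0.1
t_aos = time.perf_counter() - t0

t0 = time.perf_counter()
soa["x"] += soa["v"] * 0.1          # one vectorized sweep over contiguous memory
t_soa = time.perf_counter() - t0
```

    The effect in compiled C++ is the same in kind (unit-stride loads that vectorize) even though the Python-level interpreter overhead exaggerates the gap here.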

  8. Beta particle transport and its impact on betavoltaic battery modeling.

    PubMed

    Alam, Tariq R; Pierson, Mark A; Prelas, Mark A

    2017-12-01

    Simulations of beta particle transport from a Ni-63 radioisotope in silicon using the Monte Carlo N-Particle (MCNP) transport code were demonstrated for a monoenergetic beta particle at the average energy, a monoenergetic beta particle at the maximum energy, and the more precise full beta energy spectrum of Ni-63. The beta particle penetration depth and the shape of the energy deposition varied significantly among the transport approaches. A penetration depth of 2.25 ± 0.25 µm with a peak in energy deposition was found when using the monoenergetic average energy, and a depth of 14.25 ± 0.25 µm with an exponentially decreasing energy deposition was found when using the full beta energy spectrum and a 0° angular variation. For a 90° angular variation, i.e., an isotropic source, the penetration depth decreased to 12.75 ± 0.25 µm and the backscattering coefficient increased to 0.46, with 30.55% of the beta energy escaping when using the full beta energy spectrum. Similarly, for both a 0° angular variation and an isotropic source, an overprediction of the short-circuit current and open-circuit voltage computed with a simplified drift-diffusion model was observed when compared to experimental results from the literature. Good agreement was found when self-absorption and isotope dilution in the source were considered. The self-absorption effect was 15% for a Ni-63 source with an activity of 0.25 mCi, increasing to about 28.5% for a higher source activity of 1 mCi due to the increased thickness of the Ni-63 source. Source thicknesses of approximately 0.1 µm and 0.4 µm for these Ni-63 activities predicted about 15% and 28.5% self-absorption in the source, respectively, in MCNP simulations with an isotropic source.
The modeling assumptions regarding beta particle energy inputs, semiconductor junction depth, backscattering of beta particles, an isotropic beta source, and self-absorption of the radioisotope have significant impacts on betavoltaic battery design.

  9. IMPLEMENTATION OF SINK PARTICLES IN THE ATHENA CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong Hao; Ostriker, Eve C., E-mail: hgong@astro.umd.edu, E-mail: eco@astro.princeton.edu

    2013-01-15

    We describe the implementation and tests of sink particle algorithms in the Eulerian grid-based code Athena. The introduction of sink particles enables the long-term evolution of systems in which localized collapse occurs, and it is impractical (or unnecessary) to resolve the accretion shocks at the centers of collapsing regions. We discuss the similarities and differences of our methods compared to other implementations of sink particles. Our criteria for sink creation are motivated by the properties of the Larson-Penston collapse solution. We use standard particle-mesh methods to compute particle and gas gravity together. Accretion of mass and momenta onto sinks is computed using fluxes returned by the Riemann solver. A series of tests based on previous analytic and numerical collapse solutions is used to validate our method and implementation. We demonstrate use of our code for applications with a simulation of planar converging supersonic turbulent flow, in which multiple cores form and collapse to create sinks; these sinks continue to interact and accrete from their surroundings over several Myr.

  10. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
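
    The Cloud-in-Cell deposition step can be sketched in one dimension: each particle's mass is shared linearly between its two nearest grid points, conserving total mass exactly (Nyx performs the 3-D analogue on its grid hierarchy):

```python
import numpy as np

def cic_deposit(pos, mass, ngrid, box):
    """1-D Cloud-in-Cell: each particle's mass is shared linearly between its
    two nearest (cell-centered, periodic) grid points; Nyx does the 3-D analogue."""
    dx = box / ngrid
    rho = np.zeros(ngrid)
    f = pos / dx - 0.5                  # position in cell units, cell-centered
    i = np.floor(f).astype(int)
    w = f - i                           # linear weight given to the right neighbor
    np.add.at(rho, i % ngrid, mass * (1.0 - w))
    np.add.at(rho, (i + 1) % ngrid, mass * w)
    return rho / dx                     # mass per unit length

# two unit-mass particles: one exactly at a cell center, one a quarter-cell off
rho = cic_deposit(np.array([2.5, 2.75]), np.array([1.0, 1.0]), ngrid=8, box=8.0)
```

    `np.add.at` performs an unbuffered scatter-add, so several particles landing in the same cell accumulate correctly.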

  11. High-Speed Particle-in-Cell Simulation Parallelized with Graphic Processing Units for Low Temperature Plasmas for Material Processing

    NASA Astrophysics Data System (ADS)

    Hur, Min Young; Verboncoeur, John; Lee, Hae June

    2014-10-01

    Particle-in-cell (PIC) simulations offer higher fidelity than fluid simulations for plasma devices that require transient kinetic modeling. The method makes fewer approximations to the plasma kinetics, but requires many particles and grid cells to obtain meaningful results, so the simulation time grows in proportion to the number of particles and PIC simulation therefore needs high performance computing. In this research, a graphic processing unit (GPU) is adopted for high-performance PIC simulation of low-temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU). NVIDIA GeForce GPUs, with hundreds of cores, were used for the test and show cost-effective performance. The PIC code algorithm is divided into two modules, a field solver and a particle mover; the particle mover is further divided into four routines, named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motion as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single-CPU code. This work was supported by the Korea Institute of Science and Technology Information.

  12. Study of premixing phase of steam explosion with JASMINE code in ALPHA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu

    The premixing phase of steam explosions has been studied in the ALPHA Program at the Japan Atomic Energy Research Institute (JAERI). An analytical model to simulate the premixing phase, JASMINE (JAERI Simulator for Multiphase Interaction and Explosion), has been developed based on the multi-dimensional multi-phase thermal hydraulics code MISTRAL (by Fuji Research Institute Co.). The original code was extended to simulate the physics of premixing phenomena. The first stage of code validation analyzed two mixing experiments with solid particles and water: the isothermal experiment by Gilbertson et al. (1992) and the hot-particle experiment by Angelini et al. (1993) (MAGICO). The code reproduced the experiments reasonably well. The effectiveness of the TVD scheme employed in the code was also demonstrated.

  13. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. Several serial Voronoi tessellation codes exist; however, no open-source, parallel implementations are available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
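
    The same Qhull-backed quantities PARAVT computes in parallel (neighbor lists, cell volumes) can be reproduced serially with SciPy's Voronoi wrapper, shown here as a small 2-D sketch:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(3)
pts = rng.random((200, 2))
vor = Voronoi(pts)   # SciPy's serial wrapper around the same Qhull library

# natural neighbors of site 0: all sites sharing a Voronoi ridge with it
nbr = {int(j) if i == 0 else int(i) for i, j in vor.ridge_points if 0 in (i, j)}

# Voronoi cell volume (area in 2-D) of site 0, if its cell is bounded;
# cells touching the hull are unbounded and contain the index -1
region = vor.regions[vor.point_region[0]]
area = ConvexHull(vor.vertices[region]).volume if -1 not in region else np.inf
```

    A local density estimate then follows as mass over cell volume, which is the per-particle Voronoi density the abstract refers to; the parallel code's extra work lies in making these quantities consistent across task boundaries.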

  14. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering.

    PubMed

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-10-24

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures owing to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns, dependent on the incident polarization, can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by a digital metasurface encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broadband microwave frequency range. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.
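
    The geometric (Pancharatnam-Berry) phase underlying this coding scheme can be verified with a Jones-matrix sketch: an ideal half-wave meta-particle rotated by α imprints a phase 2α on circularly polarized light, so orientations spaced by π/2 realize the 0/π states of a 1-bit code (sign and basis conventions vary between references):

```python
import numpy as np

def rotated_hwp_jones(alpha):
    """Jones matrix of an ideal half-wave meta-particle rotated by alpha."""
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s], [s, c]])
    J0 = np.array([[1.0, 0.0], [0.0, -1.0]])   # half-wave retardation
    return R @ J0 @ R.T

lcp = np.array([1.0, 1.0j]) / np.sqrt(2.0)     # circular input (one convention)

alpha = np.pi / 8
out = rotated_hwp_jones(alpha) @ lcp
# out is the opposite circular state times exp(2j*alpha): the handedness flips
# and the wave acquires the geometric phase 2*alpha.
phase = np.angle(out[0] * np.sqrt(2.0))
```

    Because the phase depends only on the element's orientation, a single meta-particle geometry suffices for all coding states, which is the simplification the abstract highlights.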

  15. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering

    PubMed Central

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-01-01

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures owing to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns, dependent on the incident polarization, can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by a digital metasurface encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broadband microwave frequency range. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence. PMID:27775064

  16. Review of heavy charged particle transport in MCNP6.2

    NASA Astrophysics Data System (ADS)

    Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.

    2018-04-01

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  17. Review of Heavy Charged Particle Transport in MCNP6.2

    DOE PAGES

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George; ...

    2018-01-05

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  18. Light scattering by planetary-regolith analog samples: computational results

    NASA Astrophysics Data System (ADS)

    Väisänen, Timo; Markkanen, Johannes; Hadamcik, Edith; Levasseur-Regourd, Anny-Chantal; Lasue, Jeremie; Blum, Jürgen; Penttilä, Antti; Muinonen, Karri

    2017-04-01

    We compute light scattering by a planetary-regolith analog surface. The corresponding experimental work is from Hadamcik et al. [1] with the PROGRA2-surf [2] device measuring the polarization of dust particles. The analog samples are low density (volume fraction 0.15 ± 0.03) agglomerates produced by random ballistic deposition of almost equisized silica spheres (refractive index n=1.5 and diameter 1.45 ± 0.06 µm). Computations are carried out with the recently developed codes entitled Radiative Transfer with Reciprocal Transactions (R2T2) and Radiative Transfer Coherent Backscattering with incoherent interactions (RT-CB-ic). Both codes incorporate the so-called incoherent treatment which enhances the applicability of the radiative transfer as shown by Muinonen et al. [3]. As a preliminary result, we have computed scattering from a large spherical medium with the RT-CB-ic using equal-sized particles with diameters of 1.45 microns. The preliminary results have shown that the qualitative characteristics are similar for the computed and measured intensity and polarization curves but that there are still deviations between the characteristics. We plan to remove the deviations by incorporating a size distribution of particles (1.45 ± 0.02 microns) and detailed information about the volume density profile within the analog surface. Acknowledgments: We acknowledge the ERC Advanced Grant no. 320773 entitled Scattering and Absorption of Electromagnetic Waves in Particulate Media (SAEMPL). Computational resources were provided by CSC - IT Centre for Science Ltd, Finland. References: [1] Hadamcik E. et al. (2007), JQSRT, 106, 74-89 [2] Levasseur-Regourd A.C. et al. (2015), Polarimetry of stars and planetary systems, CUP, 61-80 [3] Muinonen K. et al. (2016), extended abstract for EMTS.

  19. PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.

    2017-12-01

Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use of ultra-low-power mixed-signal unconventional computational elements, developed by Johns Hopkins University (JHU), for High Performance Scientific Computing (HPC), and to demonstrate that capability on both fluid and particle plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code and a UCLA general-purpose relativistic Particle-In-Cell (PIC) code.

  20. Dedicated vertical wind tunnel for the study of sedimentation of non-spherical particles.

    PubMed

    Bagheri, G H; Bonadonna, C; Manzella, I; Pontelandolfo, P; Haas, P

    2013-05-01

A dedicated 4-m-high vertical wind tunnel has been designed and constructed at the University of Geneva in collaboration with the Groupe de compétence en mécanique des fluides et procédés énergétiques. With its diverging test section, the tunnel is designed to study the aerodynamic behavior of non-spherical particles with terminal velocities between 5 and 27 m s⁻¹. A particle tracking velocimetry (PTV) code was developed to calculate the drag coefficient of particles in standard conditions based on the real projected area of the particles. Results from our wind tunnel and PTV code are validated by comparing the drag coefficients of smooth spherical and cylindrical particles with the existing literature. Experiments are repeatable, with an average relative standard deviation of 1.7%. Our preliminary experiments on the effect of the particle-to-fluid density ratio on the drag coefficient of cylindrical particles show that the drag coefficient of particles freely suspended in air is lower than that measured in water or in horizontal wind tunnels. Increasing the aspect ratio of cylindrical particles is found to reduce their secondary motions, and they tend to be suspended with their maximum area normal to the airflow. The use of the vertical wind tunnel in combination with the PTV code provides a reliable and precise instrument for measuring the drag coefficient of freely moving particles of various shapes. Our ultimate goal is the study of sedimentation and aggregation of volcanic particles (density between 500 and 2700 kg m⁻³), but the wind tunnel can be used in a wide range of applications.
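The drag-coefficient retrieval used with such a vertical tunnel rests on the terminal-velocity force balance (weight equals drag, with buoyancy negligible in air). A minimal sketch of that balance, with illustrative numbers that are not taken from the paper:

```python
import math

def drag_coefficient(mass_kg, projected_area_m2, terminal_velocity_ms,
                     air_density_kgm3=1.225, g=9.81):
    """Drag coefficient from the terminal-velocity force balance:
    m*g = 0.5 * rho_air * Cd * A * v_t**2  (buoyancy neglected in air)."""
    return (2.0 * mass_kg * g
            / (air_density_kgm3 * projected_area_m2 * terminal_velocity_ms ** 2))

# Illustrative case: a 1 cm smooth sphere of density 2500 kg/m^3,
# with an assumed terminal velocity of 17 m/s (hypothetical values).
d = 0.01
rho_p = 2500.0
m = rho_p * math.pi / 6.0 * d ** 3   # sphere mass
area = math.pi / 4.0 * d ** 2        # projected area
cd = drag_coefficient(m, area, 17.0)
```

The same formula applies to non-spherical particles once the real projected area, as measured by the PTV system, is substituted for the geometric one.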

  1. Multi-scale modeling of irradiation effects in spallation neutron source materials

    NASA Astrophysics Data System (ADS)

    Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.

    2011-07-01

Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) for nuclear reactions, and modeled the interactions between high-energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte Carlo (kMC) methods in each subcascade. The third part considered damage structure evolution, estimated by reaction kinetics analysis. The fourth part estimated the change in mechanical properties using three-dimensional discrete dislocation dynamics (DDD). Using this four-part code, stress-strain curves for high-energy proton-irradiated Ni were obtained.

  2. A three-dimensional spacecraft-charging computer code

    NASA Technical Reports Server (NTRS)

    Rubin, A. G.; Katz, I.; Mandell, M.; Schnuelle, G.; Steen, P.; Parks, D.; Cassidy, J.; Roche, J.

    1980-01-01

    A computer code is described which simulates the interaction of the space environment with a satellite at geosynchronous altitude. Employing finite elements, a three-dimensional satellite model has been constructed with more than 1000 surface cells and 15 different surface materials. Free space around the satellite is modeled by nesting grids within grids. Applications of this NASA Spacecraft Charging Analyzer Program (NASCAP) code to the study of a satellite photosheath and the differential charging of the SCATHA (satellite charging at high altitudes) satellite in eclipse and in sunlight are discussed. In order to understand detector response when the satellite is charged, the code is used to trace the trajectories of particles reaching the SCATHA detectors. Particle trajectories from positive and negative emitters on SCATHA also are traced to determine the location of returning particles, to estimate the escaping flux, and to simulate active control of satellite potentials.

  3. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  4. Studies of particle wake potentials in plasmas

    NASA Astrophysics Data System (ADS)

    Ellis, Ian N.; Graziani, Frank R.; Glosli, James N.; Strozzi, David J.; Surh, Michael P.; Richards, David F.; Decyk, Viktor K.; Mori, Warren B.

    2011-09-01

A detailed understanding of electron stopping and scattering in plasmas with variable values for the number of particles within a Debye sphere is still not at hand. Presently, there is some disagreement in the literature concerning the proper description of these processes. Theoretical models assume electrostatic (Coulomb force) interactions between particles and neglect magnetic effects. Developing and validating proper descriptions requires studying the processes using first-principles plasma simulations. We are using the particle-particle particle-mesh (PPPM) code ddcMD and the particle-in-cell (PIC) code BEPS to perform these simulations. As a starting point in our study, we examine the wake of a particle passing through a plasma in 3D electrostatic simulations performed with ddcMD and BEPS. In this paper, we compare the wakes observed in these simulations with each other and predictions from collisionless kinetic theory. The relevance of the work to Fast Ignition is discussed.
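The "number of particles within a Debye sphere" that such studies vary is the standard plasma parameter. A small sketch of its textbook definition, with illustrative plasma conditions that are not taken from the paper:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
QE = 1.602176634e-19     # elementary charge, C

def debye_length(n_e_m3, t_e_ev):
    """Electron Debye length, lambda_D = sqrt(eps0*kT_e / (n_e*e^2)),
    with the temperature supplied directly in eV (kT_e = t_e_ev * QE)."""
    return math.sqrt(EPS0 * t_e_ev * QE / (n_e_m3 * QE ** 2))

def particles_in_debye_sphere(n_e_m3, t_e_ev):
    """Plasma parameter N_D = (4/3) * pi * n_e * lambda_D^3."""
    lam = debye_length(n_e_m3, t_e_ev)
    return 4.0 / 3.0 * math.pi * n_e_m3 * lam ** 3

# Illustrative: a 1 keV plasma at n_e = 1e20 m^-3
lam_d = debye_length(1e20, 1000.0)
n_debye = particles_in_debye_sphere(1e20, 1000.0)
```

Collisional effects grow as N_D shrinks, which is why simulations scanning this parameter discriminate between the competing stopping and scattering descriptions.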

  5. Comparison of different methods used in integral codes to model coagulation of aerosols

    NASA Astrophysics Data System (ADS)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
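The discrete Smoluchowski coagulation equation underlying all of the compared methods can be sketched directly. The toy integrator below uses a constant kernel and explicit Euler stepping, and suppresses coagulation out of the finite size range so that total mass is conserved exactly; it illustrates the governing equation only, and is not the SOCRAT, ASTEC, MELCOR, Hounslow, or Jacobson scheme:

```python
def coagulate(n, kernel=1.0, dt=1e-3, steps=500):
    """Explicit-Euler integration of the discrete Smoluchowski equation
    with a constant collision kernel. n[k] is the number density of
    (k+1)-mers. Pairs whose product would leave the size range are
    suppressed, so total mass sum((k+1)*n[k]) is conserved exactly."""
    nmax = len(n)
    n = list(n)
    for _ in range(steps):
        dn = [0.0] * nmax
        for i in range(nmax):
            for j in range(i, nmax):
                if i + j + 2 > nmax:      # product size out of range
                    continue
                # factor 1/2 avoids double-counting identical-size pairs
                rate = kernel * n[i] * n[j] * (0.5 if i == j else 1.0)
                dn[i] -= rate             # both partners are consumed...
                dn[j] -= rate
                dn[i + j + 1] += rate     # ...and one merged particle forms
        for k in range(nmax):
            n[k] += dt * dn[k]
    return n

n_final = coagulate([1.0] + [0.0] * 9)          # start from monomers only
total_mass = sum((k + 1) * v for k, v in enumerate(n_final))
```

The "coarse spectrum" error discussed above arises when such discrete sizes are lumped into sectional bins with volume ratios of two or more between neighbors.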

  6. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations, both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue, as well as milestones towards a modern, accurate, high-performance PIC code for high energy particle acceleration.
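A common remedy for PIC load imbalance is to re-partition the domain from a prefix sum of per-cell particle counts, so that each rank receives a contiguous slab of roughly equal load. The following 1D sketch shows only that standard idea; it is not the algorithm the authors propose:

```python
import bisect
from itertools import accumulate

def balance_partitions(cell_loads, n_ranks):
    """1D re-partitioning from a prefix sum of per-cell particle counts:
    rank r's slab ends at the first cell where the cumulative load
    reaches r/n_ranks of the total. Returns the slab boundary indices."""
    prefix = list(accumulate(cell_loads))
    total = prefix[-1]
    cuts = [bisect.bisect_left(prefix, total * r / n_ranks) + 1
            for r in range(1, n_ranks)]
    return [0] + cuts + [len(cell_loads)]

# Illustrative load profile: a quiet region followed by a dense one,
# as happens when an accelerated bunch concentrates the particles.
loads = [10, 10, 10, 10, 100, 100, 100, 100]
bounds = balance_partitions(loads, 4)
slab_loads = [sum(loads[a:b]) for a, b in zip(bounds, bounds[1:])]
```

A naive equal-cell split of this profile would give one rank a load of 200; the prefix-sum cut caps the maximum slab load at 140. A production version would also have to enforce strictly increasing cuts for degenerate load profiles.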

  7. Energy deposition calculated by PHITS code in Pb spallation target

    NASA Astrophysics Data System (ADS)

    Yu, Quanzhi

    2016-01-01

Energy deposition in a Pb spallation target irradiated by high-energy protons was calculated with the PHITS2.52 code. The energy deposition and neutron production calculated by PHITS were validated, and the results show good agreement with the experimental data. A detailed comparison shows that PHITS overestimated the total energy deposition by about 15% relative to the experimental data. For the energy deposition along the length of the Pb target, the discrepancy appears mainly at the front part of the target. The calculation indicates that most of the energy deposition comes from ionization by the primary protons and the secondary particles they produce. With the event generator mode of PHITS, the deposited-energy distribution for the particles and the light nuclei is presented for the first time. It indicates that primary protons with energies above 100 MeV are the largest contributors to the total energy deposition. The energy depositions peaking at 10 MeV and 0.1 MeV are mainly caused by electrons, pions, d, t, 3He and α particles during the cascade process and the evaporation process, respectively. The energy deposition density caused by different proton beam profiles is also calculated and compared. Such calculations and analyses are helpful for better understanding the physical mechanism of energy deposition in the spallation target, and useful for the thermal-hydraulic design of the target.

  8. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega_b = 1, and sigma_8 = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L^3, where L = 64 h^-1 Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N^3 = 32^3, 64^3, 128^3, and 256^3 cells, the SPH codes at N^3 = 32^3 and 64^3 particles. Results were then rebinned to a 16^3 grid, with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as <T> and <rho^2>^(1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho^2) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.

  9. Half-Cell RF Gun Simulations with the Electromagnetic Particle-in-Cell Code VORPAL

    NASA Astrophysics Data System (ADS)

    Paul, K.; Dimitrov, D. A.; Busby, R.; Bruhwiler, D. L.; Smithe, D.; Cary, J. R.; Kewisch, J.; Kayran, D.; Calaga, R.; Ben-Zvi, I.

    2009-01-01

    We have simulated Brookhaven National Laboratory's half-cell superconducting RF gun design for a proposed high-current ERL using the three-dimensional, electromagnetic particle-in-cell code VORPAL. VORPAL computes the fully self-consistent electromagnetic fields produced by the electron bunches, meaning that it accurately models space-charge effects as well as bunch-to-bunch beam loading effects and the effects of higher-order cavity modes, though these are beyond the scope of this paper. We compare results from VORPAL to the well-established space-charge code PARMELA, using RF fields produced by SUPERFISH, as a benchmarking exercise in which the two codes should agree well.

  10. Simulation of erosion by a particulate airflow through a ventilator

    NASA Astrophysics Data System (ADS)

    Ghenaiet, A.

    2015-08-01

Particulate flows are a serious problem in air ventilation systems, leading to erosion of rotor blades and aerodynamic performance degradation. This paper presents the numerical results of sand particle trajectories and erosion patterns in an axial ventilator and the subsequent blade deterioration. The flow field was solved separately using the code CFX-TASCflow. The Lagrangian approach for solid-particle tracking implemented in our in-house code considers particle-eddy interaction, the particle size distribution, particle rebounds and near-wall effects. The assessment of erosion wear is based on the impact frequency and local values of the erosion rate. Particle trajectories and erosion simulation revealed distinctive zones of impacts with high rates of erosion, mainly on the blade pressure side, whereas the suction side is eroded around the leading edge.

  11. Energetic Particle Loss Estimates in W7-X

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Akaslompolo, Simppa; Drevlak, Micheal; Wolf, Robert; Darrow, Douglass; Gates, David; W7-X Team

    2017-10-01

The collisionless loss of high-energy H+ and D+ ions in the W7-X device is examined using the BEAMS3D code. Simulations of collisionless losses are performed for a large ensemble of particles distributed over various flux surfaces. A clear loss cone is present in the particle distribution in all cases. These simulations are compared against slowing-down simulations in which electron impact, ion impact, and pitch angle scattering are considered. Full-device simulations allow tracing of particle trajectories to the first-wall components and provide estimates for the placement of a novel set of energetic particle detectors. Recent performance upgrades allow the code to run on more than 1000 processors, providing high-fidelity simulations. Speedup and future work are discussed. DE-AC02-09CH11466.
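The loss cone seen in such distributions follows from the magnetic-mirror trapping criterion: a particle at the field minimum with too small a pitch angle reaches the field maximum before mirroring and escapes. A one-line sketch of that criterion (a strong simplification of BEAMS3D, which follows full orbits in the 3D stellarator field):

```python
import math

def loss_cone_angle_deg(b_min, b_max):
    """Magnetic-mirror trapping boundary: particles whose pitch angle
    at the field minimum satisfies sin(theta)**2 < b_min/b_max are not
    mirrored before reaching b_max and are promptly lost."""
    return math.degrees(math.asin(math.sqrt(b_min / b_max)))

theta_lc = loss_cone_angle_deg(1.0, 4.0)  # illustrative mirror ratio of 4
```

For a mirror ratio of 4 the boundary sits at 30 degrees; in a real stellarator the effective ratio varies along each field line, which is what the full-orbit code resolves.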

  12. Tristan code and its application

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.

Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is suitable for starting to run simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.

  13. FLUKA: A Multi-Particle Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  14. Expanding the genetic code for site-specific labelling of tobacco mosaic virus coat protein and building biotin-functionalized virus-like particles.

    PubMed

    Wu, F C; Zhang, H; Zhou, Q; Wu, M; Ballard, Z; Tian, Y; Wang, J Y; Niu, Z W; Huang, Y

    2014-04-18

A method for site-specific and high-yield modification of tobacco mosaic virus coat protein (TMVCP) utilizing genetic code expansion technology and a copper-free cycloaddition reaction has been established, and biotin-functionalized virus-like particles were built by the self-assembly of the protein monomers.

  15. Dissemination and support of ARGUS for accelerator applications. Technical progress report, April 24, 1991--January 20, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  16. Numerical studies of the deposition of material released from fixed and rotary wing aircraft

    NASA Technical Reports Server (NTRS)

    Bilanin, A. J.; Teske, M. E.

    1984-01-01

    The computer code AGDISP (AGricultural DISPersal) has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern. In this report, the equations governing the motion of aerially released particles are developed, including a description of the evaporation model used. A series of case studies, using AGDISP, are included.
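The stated novelty of AGDISP is marching the mean particle trajectory and the variance about the mean simultaneously. The sketch below is a deliberately minimal caricature of that idea, with a constant settling velocity and a constant variance growth rate; the real model's coefficients vary with the local flow, wake vortices, and evaporation:

```python
def disperse(z0, w_settle, var_growth, dt, steps):
    """March the mean release height and the positional variance about
    the mean together, in the spirit of a simultaneous mean/variance
    formulation:
        d<z>/dt       = -w_settle    (gravitational settling)
        d(sigma^2)/dt = var_growth   (turbulent spreading, constant here)"""
    z, var = z0, 0.0
    for _ in range(steps):
        z -= w_settle * dt
        var += var_growth * dt
    return z, var

# Illustrative numbers only: 10 m release, 0.5 m/s settling, 10 s flight
z_mean, variance = disperse(z0=10.0, w_settle=0.5, var_growth=0.02,
                            dt=0.1, steps=100)
```

Carrying the variance alongside the mean is what lets a single-pass code predict a deposition footprint without tracking thousands of individual stochastic trajectories.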

  17. Calculated effects of backscattering on skin dosimetry for nuclear fuel fragments.

    PubMed

    Aydarous, A Sh

    2008-01-01

The size of hot particles contained in nuclear fallout ranges from 10 nm to 20 µm for worldwide weapons fallout. Hot particles from nuclear power reactors can be significantly bigger (100 µm to several millimetres). Electron backscattering from such particles is a prominent secondary effect in beta dosimetry for radiological protection purposes, such as skin dosimetry. In this study, the effect on skin dose of electron backscattering due to hot-particle contamination is investigated, including parameters such as detector area, source radius, source energy, scattering material and source density. The Monte Carlo N-Particle code (MCNP4C) was used to calculate the depth-dose distribution for 10 different beta sources and various materials. The backscattering dose factors (BSDF) were then calculated. The BSDF magnitude depends significantly on the detector area, source radius and scattering material, and it clearly increases with increasing detector area. For high-Z scatterers, the BSDF can reach 40% and 100% for sources with radii of 0.1 and 0.0001 cm, respectively. The variation of the BSDF with source radius, source energy and source density is discussed.

  18. Plume particle collection and sizing from static firing of solid rocket motors

    NASA Technical Reports Server (NTRS)

    Sambamurthi, Jay K.

    1995-01-01

A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large-scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass-averaged diameters, d43, measured from the samples for the different motors ranged from 8 to 11 µm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry-standard Hermsen's correlation, within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13-0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.
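For a log-normal size distribution like the one fitted here, the mass-averaged diameter d43 follows in closed form from the distribution's moments. A sketch, assuming the quoted standard deviation refers to log10 of the diameter (the median diameter used below is illustrative, not a measured value):

```python
import math

def d43_lognormal(d_median, sigma_log10):
    """Mass-averaged diameter d43 = E[d^4]/E[d^3] of a lognormal size
    distribution. With s = ln(10)*sigma_log10, the moment formula
    E[d^n] = exp(n*mu + n**2 * s**2 / 2) gives
    d43 = d_median * exp(3.5 * s**2)."""
    s = math.log(10.0) * sigma_log10
    return d_median * math.exp(3.5 * s ** 2)

# Illustrative: median diameter 7 microns, sigma = 0.14 (mid-range of fit)
d43 = d43_lognormal(7.0, 0.14)
```

Because the exponent is positive, d43 always exceeds the median diameter, which is consistent with mass-weighted averages being dominated by the larger particles.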

  19. SPH Simulations of Spherical Bondi Accretion: First Step of Implementing AGN Feedback in Galaxy Formation

    NASA Astrophysics Data System (ADS)

    Barai, Paramita; Proga, D.; Nagamine, K.

    2011-01-01

Our motivation is to numerically test the assumption, made in many previous galaxy-formation studies that include AGN feedback, that the central massive black hole (BH) of a galaxy accretes mass at the Bondi-Hoyle accretion rate with an ad hoc choice of parameters. We perform simulations of a spherical distribution of gas, within the radius range 0.1-200 pc, accreting onto a central supermassive black hole (the Bondi problem), using the 3D Smoothed Particle Hydrodynamics code Gadget. In our simulations we study the radial distribution of various gas properties (density, velocity, temperature, Mach number). We compute the central mass inflow rate at the inner boundary (0.1 pc), and investigate how different gas properties (initial density and velocity profiles) and computational parameters (simulation outer boundary, particle number) affect the central inflow. Radiative processes (namely heating by a central X-ray corona and gas cooling) have been included in our simulations. We study the thermal history of the accreting gas, and identify the contributions of the radiative and adiabatic terms in shaping the gas properties. We find that the current implementation of artificial viscosity in the Gadget code causes unwanted extra heating near the inner radius.
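The Bondi-Hoyle prescription being tested can be written down compactly. A sketch of the classic spherical Bondi rate, with parameter values that are illustrative only and not taken from the simulations:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bondi_rate(m_bh_kg, rho_inf, cs_inf, lam=0.25):
    """Classic spherical Bondi accretion rate,
        Mdot = 4*pi*lam * (G*M)**2 * rho_inf / cs_inf**3,
    where rho_inf and cs_inf are the ambient density and sound speed.
    The eigenvalue lam depends on the equation of state: 1/4 for a
    gamma = 5/3 gas, about 1.12 in the isothermal limit."""
    return 4.0 * math.pi * lam * (G * m_bh_kg) ** 2 * rho_inf / cs_inf ** 3

msun = 1.989e30
# Illustrative SMBH parameters: 1e8 Msun, rho = 1e-21 kg/m^3, cs = 30 km/s
rate_1 = bondi_rate(1e8 * msun, 1e-21, 3e4)
rate_2 = bondi_rate(2e8 * msun, 1e-21, 3e4)   # doubling M quadruples Mdot
```

The quadratic dependence on BH mass is what makes the ad hoc multiplicative "boost" parameters used in galaxy-formation codes so influential, and hence worth testing against resolved inflow simulations.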

  20. Convolution Operations on Coding Metasurface to Reach Flexible and Continuous Controls of Terahertz Beams.

    PubMed

    Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang

    2016-10-01

The concept of the coding metasurface links physical metamaterial particles to digital codes, and hence makes it possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle, called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrary predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be obtained simply by modulo addition of two coding matrices. This study demonstrates that scattering patterns calculated directly from the coding pattern using the Fourier transform show excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of the beam toward arbitrary directions. This work opens a new route to studying metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
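The scattering-pattern shift can be illustrated numerically with the FFT shift theorem: adding a linear phase gradient to a coding pattern (modulo 2π) multiplies the aperture fields, which circularly shifts the far-field pattern. A 1D numpy sketch, an idealization of the paper's 2D metasurface with continuous rather than quantized phases:

```python
import numpy as np

def far_field(phase):
    """Scattering pattern (up to constants) as the discrete Fourier
    transform of the aperture field exp(i*phase) over the coding elements."""
    return np.fft.fft(np.exp(1j * np.asarray(phase)))

n = 64
uniform = np.zeros(n)                          # '000...' coding: beam at index 0
gradient = 2 * np.pi * 5 * np.arange(n) / n    # linear phase gradient

# Adding the gradient coding to the uniform coding steers the far-field
# peak from spectral index 0 to index 5 (the shift theorem).
peak_before = int(np.argmax(np.abs(far_field(uniform))))
peak_after = int(np.argmax(np.abs(far_field(uniform + gradient))))
```

Because any coding pattern can be composed with such gradients, the beam direction is tuned continuously by the slope of the added phase, which is the essence of the convolution-based steering described above.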

  1. Computed secondary-particle energy spectra following nonelastic neutron interactions with C-12 for E(n) between 15 and 60 MeV: Comparisons of results from two calculational methods

    NASA Astrophysics Data System (ADS)

    Dickens, J. K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, d(sigma)/dE, following nonelastic neutron interactions with C-12 for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael who used an intranuclear cascade code, including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed.

  2. Stopping power for 4.8-6.8 MeV C ions along [1 1 0] and [1 1 1] directions in Si

    NASA Astrophysics Data System (ADS)

    Yoneda, Tomoaki; Horikawa, Junsei; Saijo, Satoshi; Arakawa, Masakazu; Yamamoto, Yukio; Yamamoto, Yasukazu

    2018-06-01

The stopping power for C ions with energies in the range 4.8-6.8 MeV was investigated in a SIMOX (Separation by IMplanted OXygen into silicon) structure of Si(1 0 0)/SiO2/Si(1 0 0). Backscattering spectra were measured for random and channeling incidence along the [1 1 0] and [1 1 1] axes. The scattering angle was set to 90° to avoid an excessive decrease of the kinematic factor. The ratios of the [1 1 0] and [1 1 1] channeling stopping powers to the random stopping power were determined to be around 0.65 and 0.77, respectively, for 4.8-6.8 MeV ions. The validity of the impact-parameter-dependent stopping power calculated using Grande and Schiwietz's CasP (convolution approximation for swift particles) code was confirmed. The C ion trajectories and flux distributions in crystalline silicon were calculated by Monte Carlo simulation. The stopping power calculated with the CasP code agrees with the experimental results within the accuracy of the measurement.

  3. Analysis of dose-LET distribution in the human body irradiated by high energy hadrons.

    PubMed

    Sato, T; Tsuda, S; Sakamoto, Y; Yamaguchi, Y; Niita, K

    2003-01-01

For the purposes of radiological protection, it is important to analyse the particle field inside a human body irradiated by high-energy hadrons, since such hadrons can produce a variety of secondary particles which play an important role in the energy deposition process, and to characterise their radiation qualities. Therefore, Monte Carlo calculations were performed to evaluate dose distributions in terms of the linear energy transfer of ionising particles (dose-LET distributions) using a newly developed particle transport code (Particle and Heavy Ion Transport code System, PHITS) for incident neutrons, protons and pions with energies from 100 MeV to 200 GeV. Based on these calculations, it was found that more than 80% and 90% of the total deposited energy is attributable to ionisation by particles with LET below 10 keV µm⁻¹ for irradiation by neutrons and by the charged particles, respectively.

  4. The long non-coding RNA PARTICLE is associated with WWOX and the absence of FRA16D breakage in osteosarcoma patients.

    PubMed

    O'Leary, Valerie Bríd; Maugg, Doris; Smida, Jan; Baumhoer, Daniel; Nathrath, Michaela; Ovsepian, Saak Victor; Atkinson, Michael John

    2017-10-20

Breakage of the fragile site FRA16D disrupts the WWOX (WW Domain Containing Oxidoreductase) tumor suppressor gene in osteosarcoma. However, the frequency of breakage is not sufficient to explain the rate of WWOX loss in pathogenesis. The involvement of non-coding RNA transcripts is proposed due to their accumulation at fragile sites, where they are advocated to influence specific chromosomal regions associated with malignancy. The long ncRNA PARTICLE (promoter of MAT2A antisense radiation-induced circulating long non-coding RNA) is transiently elevated in response to irradiation and influences epigenetic silencing modification within WWOX. It now emerges that elevated PARTICLE levels are significantly associated with FRA16D non-breakage in OS patients. Although not associated with overall survival, high PARTICLE levels were found to be significantly linked to metastasis-free outcome. The transcription of both PARTICLE and WWOX is transiently responsive to exposure to low doses of radiation in osteosarcoma cell lines. Herein, a relationship between WWOX and PARTICLE transcription is suggested in human osteosarcoma cell lines representing alternative genetic backgrounds. PARTICLE over-expression ameliorated WWOX promoter activity in U2OS cells harboring FRA16D non-breakage. It can be concluded that the lncRNA PARTICLE influences the WWOX tumor suppressor and, in the absence of WWOX FRA16D breakage, is associated with OS metastasis-free survival.

  5. Importance biasing scheme implemented in the PRIZMA code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-12-31

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources and material composition, and to obtain parameters specified by the user. It can follow particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, radiation shielding and detection problems). The scheme makes it possible to adapt the trajectory-building algorithm to the peculiarities of a problem.
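    The splitting/Russian-roulette game that underlies importance biasing of this kind can be sketched as follows. This is a generic unbiased scheme, not PRIZMA's actual implementation, and the particle representation is hypothetical:

```python
import random

def importance_game(particle, weight, imp_old, imp_new, rng=random):
    """Unbiased splitting / Russian roulette when a particle crosses from a
    region of importance imp_old into one of importance imp_new. Returns a
    list of (particle, weight) pairs whose expected total weight equals the
    incoming weight, which keeps the estimator unbiased."""
    r = imp_new / imp_old
    if r >= 1.0:                          # entering a more important region: split
        n = int(r)
        frac = r - n
        if frac > 0.0 and rng.random() < frac:
            n += 1                        # round the split count stochastically
        return [(particle, weight / r)] * n
    if rng.random() < r:                  # less important region: Russian roulette
        return [(particle, weight / r)]   # survivor carries a boosted weight
    return []                             # particle killed

# Doubling the importance splits one unit-weight particle into two halves
print(importance_game("n", 1.0, 1.0, 2.0))  # -> [('n', 0.5), ('n', 0.5)]
```

    Averaged over many crossings, the total weight entering each region is preserved, which is the sense in which the game is unbiased.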

  6. ION EFFECTS IN THE APS PARTICLE ACCUMULATOR RING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calvey, J.; Harkay, K.; Yao, CY.

    2017-06-25

    Trapped ions in the APS Particle Accumulator Ring (PAR) lead to a positive coherent tune shift in both planes, which increases along the PAR cycle as more ions accumulate. This effect has been studied using an ion simulation code developed at SLAC. After modifying the code to include a realistic vacuum profile, multiple ionization, and the effect of shaking the beam to measure the tune, the simulation agrees well with our measurements. This code has also been used to evaluate the possibility of ion instabilities at the high bunch charge needed for the APS-Upgrade.

  7. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    NASA Astrophysics Data System (ADS)

    Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-01

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates the various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to alpha particles, and photons. The code is written in the C++ programming language using object-oriented technology. At first, it was applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has since been used extensively in nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions. It was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, outlining the concept and design of the code and its inputs. Details of the formulations for modeling direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission are described.

  8. Gravitational Instability of Small Particles in Stratified Dusty Disks

    NASA Astrophysics Data System (ADS)

    Shi, J.; Chiang, E.

    2012-12-01

    Self-gravity is an attractive means of forming the building blocks of planets, a.k.a. the first-generation planetesimals. For ensembles of dust particles to aggregate into self-gravitating, bound structures, they must first collect into regions of extraordinarily high density in circumstellar gas disks. We have modified the ATHENA code to simulate dusty, compressible, self-gravitating flows in a 3D shearing-box configuration, working in the limit that dust particles are small enough to be perfectly entrained in the gas. We have used our code to determine the critical density thresholds required for disk gas to undergo gravitational collapse. In the strict limit that the stopping times of particles in gas are infinitesimally small, our numerical simulations and analytic calculations reveal that the critical density threshold for gravitational collapse is orders of magnitude above what has been commonly assumed. We discuss how finite but still short stopping times under realistic conditions can lower the threshold to a level that may be attainable. [Figure captions: Nonlinear development of gravitational instability in a stratified dusty disk. Volume renderings of dust density for the bottom half of a disk at t = 0, 6, 8, and 9 Omega^{-1}: the disk first develops shearing density waves, which steepen into long filaments extending along the azimuth; these filaments eventually break up into very dense dust clumps. Time evolution of the maximum dust density within the simulation box: run std32 is a standard run with vertically averaged Toomre Q = 0.5, while Q >~ 1.0 for the remaining runs (Z1 has twice the metallicity of the standard run; Q1 has twice Q_g, the Toomre Q of the gas disk alone; M1 has twice the standard midplane dust-to-gas ratio; R1 is constructed so that the midplane density exceeds the Roche criterion while the Toomre Q remains above unity).]
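    The stability parameter quoted for these runs is the standard Toomre Q; a one-line helper makes the convention explicit (the prefactor follows the usual gas-disk definition, in cgs units):

```python
import math

G = 6.674e-8  # gravitational constant, cgs units

def toomre_Q(c_s, kappa, sigma):
    """Toomre stability parameter Q = c_s * kappa / (pi * G * Sigma)
    for a gas disk with sound speed c_s, epicyclic frequency kappa,
    and surface density Sigma; Q < 1 indicates susceptibility to
    axisymmetric gravitational instability."""
    return c_s * kappa / (math.pi * G * sigma)

# Example: parameters chosen so Q evaluates to exactly 1 (marginal stability)
print(toomre_Q(math.pi * G, 1.0, 1.0))  # -> 1.0
```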

  9. Physical Models for Particle Tracking Simulations in the RF Gap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shishlo, Andrei P.; Holmes, Jeffrey A.

    2015-06-01

    This document describes the algorithms that are used in the PyORBIT code to track the particles accelerated in the Radio-Frequency cavities. It gives the mathematical description of the algorithms and the assumptions made in each case. The derived formulas have been implemented in the PyORBIT code. The necessary data for each algorithm are described in detail.
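    The simplest algorithm of this family is the thin-lens RF gap, in which the particle receives an instantaneous energy kick ΔW = q·E0TL·cos(φ). The sketch below is illustrative only, not the PyORBIT implementation:

```python
import math

def rf_gap_kick(W, q, E0TL, phi):
    """Thin-lens RF gap: the kinetic energy W (MeV) changes by
    dW = q * E0TL * cos(phi), where q is the charge in units of e,
    E0TL the effective gap voltage (MV), and phi the RF phase (rad)
    at the moment the particle crosses the gap center."""
    return W + q * E0TL * math.cos(phi)

# A 2.5 MeV proton crossing a 0.5 MV gap at -30 degrees synchronous phase
print(rf_gap_kick(2.5, 1.0, 0.5, math.radians(-30.0)))  # ~2.933 MeV
```

    A full gap model additionally updates the phase and applies a transverse defocusing kick; those terms are omitted here.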

  10. The MCNP-DSP code for calculations of time and frequency analysis parameters for subcritical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, T.E.; Mihalczo, J.T.

    1995-12-31

    This paper describes MCNP-DSP, a modified version of the MCNP code. Variance reduction features were disabled to provide strictly analog particle tracking, so that fluctuating processes can be followed more accurately. Some of the neutron and photon physics routines were modified to better represent the production of particles. Other modifications are also discussed.

  11. Particle-in-cell code library for numerical simulation of the ECR source plasma

    NASA Astrophysics Data System (ADS)

    Shirkov, G.; Alexandrov, V.; Preisendorf, V.; Shevtsov, V.; Filippov, A.; Komissarov, R.; Mironov, V.; Shirkova, E.; Strekalovsky, O.; Tokareva, N.; Tuzikov, A.; Vatulin, V.; Vasina, E.; Fomin, V.; Anisimov, A.; Veselov, R.; Golubev, A.; Grushin, S.; Povyshev, V.; Sadovoi, A.; Donskoi, E.; Nakagawa, T.; Yano, Y.

    2003-05-01

    The project "Numerical simulation and optimization of ion accumulation and production in multicharged ion sources" is funded by the International Science and Technology Center (ISTC). A summary of recent project development and the first version of a computer code library for the simulation of electron-cyclotron resonance (ECR) source plasmas, based on the particle-in-cell method, are presented.

  12. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Shawn A.

    The Mercury Monte Carlo particle transport code is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. In the proposed Trinity Open Science calculations, I will investigate computer science aspects of the code which are relevant to convergence of the simulation quantities with increasing Monte Carlo particle counts.

  13. A computer program for two-particle intrinsic coefficients of fractional parentage

    NASA Astrophysics Data System (ADS)

    Deveikis, A.

    2012-06-01

    A Fortran 90 program CESOS for the calculation of the two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin and an arbitrary number of oscillator quanta (CESOs) is presented. The implemented procedure for CESOs calculation consistently follows the principles of antisymmetry and translational invariance. The approach is based on a simple enumeration scheme for antisymmetric many-particle states, efficient algorithms for the calculation of the coefficients of fractional parentage for j-shells with isospin, and construction of the subspace of the center-of-mass Hamiltonian eigenvectors corresponding to the minimal eigenvalue equal to 3/2 (in ℏω). The program provides fast calculation of CESOs for a given particle number and produces results with small numerical uncertainties. The introduced CESOs may be used for the calculation of expectation values of two-particle nuclear shell-model operators within the isospin formalism. Program summary. Program title: CESOS. Catalogue identifier: AELT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 932. No. of bytes in distributed program, including test data, etc.: 61 023. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Any computer with a Fortran 90 compiler. Operating system: Windows XP, Linux. RAM: The memory demand depends on the number of particles A and the excitation energy of the system E. Computation of the A=6 particle system with total angular momentum J=0 and total isospin T=1 requires around 4 kB of RAM at E=0, ~3 MB at E=3, and ~172 MB at E=5.
Classification: 17.18. Nature of problem: The code CESOS generates a list of two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin. Solution method: The method is based on the observation that CESOs may be obtained by diagonalizing the center-of-mass Hamiltonian in the basis set of antisymmetric A-particle oscillator functions with singled-out dependence on the Jacobi coordinates of the two last particles, and choosing the subspace of its eigenvectors corresponding to the minimal eigenvalue equal to 3/2. Restrictions: One run of the code CESOS generates CESOs for one specified set of (A,E,J,T) values only. The restrictions on the (A,E,J,T) values are completely determined by the restrictions on the computation of the single-shell CFPs and two-particle multishell CFPs (GCFPs) [1]. The full sets of single-shell CFPs may be calculated up to the j=9/2 shell (for any particular shell of the configuration); shells with j⩾11/2 cannot be fully occupied (an implementation constraint). The calculation of GCFPs is limited to A<86 when E=0 (due to memory constraints); small numbers of particles allow significantly higher excitations. Any allowed values of J and T may be chosen for the specified values of A and E. The complete list of allowed values of J and T for the chosen values of A and E may be generated by the GCFP program (CPC Program Library, Catalogue Id. AEBI_v1_0). The actual scale of the CESOs computation depends strongly on the magnitude of the A and E values. Although there are no limitations on the A and E values (within the limits of single-shell and multishell CFP calculation), the generation of the corresponding list of CESOs is limited by the available computing resources. For example, the computation of CESOs for A=6, (J,T)=(1,0) at E=5 took around 14 hours, and the system with A=11, (J,T)=(1/2,3/2) at E=2 requires around 15 hours. These computations were performed on a Pentium 3 GHz PC with 1 GB RAM [2].
Unusual features: It is possible to test the computed CESOs without saving them to a file. This allows the user to learn their number and approximate computation time, and to evaluate the accuracy of the calculations. Additional comments: The program CESOS uses the code from the GCFP program for the calculation of the two-particle multishell coefficients of fractional parentage. Running time: It depends on the size of the problem. The A=6 particle system with (J,T)=(0,1) took around 31 seconds on a Pentium 3 GHz PC with 1 GB RAM at E=3, and about 2.6 hours at E=5.

  14. Nano-particle drag prediction at low Reynolds number using a direct Boltzmann-BGK solution approach

    NASA Astrophysics Data System (ADS)

    Evans, B.

    2018-01-01

    This paper outlines a novel approach for the solution of the Boltzmann-BGK equation describing molecular gas dynamics, applied to the challenging problem of drag prediction for a 2D circular nano-particle at transitional Knudsen number (0.0214) and low Reynolds number (0.25-2.0). The numerical scheme utilises a discontinuous-Galerkin finite element discretisation of the physical space representing the particle geometry, and a high-order discretisation of molecular velocity space describing the molecular distribution function. The paper shows that this method produces drag predictions that align well with the range of predictions generated for this problem by the alternative numerical approaches of molecular dynamics codes and a modified continuum scheme. It also demonstrates the sensitivity of the flow-field solutions, and therefore of the drag predictions, to the wall absorption parameter used to construct the solid-wall boundary condition in the solver algorithm. The results of this work have applications in fields ranging from diagnostics and therapeutics in medicine to semiconductors and xerography.
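    The collision side of such a solver is the BGK operator, which relaxes the distribution function toward a local equilibrium at rate 1/τ. A minimal explicit-Euler sketch of that single term (the discontinuous-Galerkin transport part is omitted):

```python
def bgk_relax(f, f_eq, dt, tau):
    """One explicit time step of the BGK collision term
    df/dt = (f_eq - f) / tau, applied pointwise over the
    discrete molecular-velocity nodes."""
    return [fi + dt * (fe - fi) / tau for fi, fe in zip(f, f_eq)]

f = [0.0, 1.0, 2.0]        # distribution values at three velocity nodes
f_eq = [1.0, 1.0, 1.0]     # local Maxwellian evaluated at the same nodes
print(bgk_relax(f, f_eq, dt=0.1, tau=1.0))  # -> [0.1, 1.0, 1.9]
```

    In a production scheme the relaxation is usually treated implicitly so that dt is not restricted by τ.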

  15. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, since more particle tracks can be simulated in a short response time.
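    The key ingredient in this kind of parallelization is giving each worker an independent pseudo-random stream (the role SPRNG and DCMT play for MC4) so that histories on different ranks are uncorrelated. A toy sketch with a hypothetical exponential path-length model, using Python multiprocessing in place of MPI:

```python
import random
from multiprocessing import Pool

def track_batch(args):
    """Follow a batch of toy particle histories (exponential path lengths,
    unit mean free path) using an independent per-worker RNG stream."""
    n_histories, seed = args
    rng = random.Random(seed)   # independent stream, one per rank
    return sum(rng.expovariate(1.0) for _ in range(n_histories))

def mean_path_parallel(n_total, n_workers=4):
    """Split the histories across workers and combine the partial tallies."""
    per = n_total // n_workers
    batches = [(per, 1000 + i) for i in range(n_workers)]  # distinct seeds
    with Pool(n_workers) as pool:
        partials = pool.map(track_batch, batches)
    return sum(partials) / (per * n_workers)

if __name__ == "__main__":
    print(mean_path_parallel(40000))   # close to 1.0, the true mean free path
```

    Because the batches are statistically independent, the tallies simply add, which is why near-linear speedup is attainable for large problem sizes.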

  16. Molecular Dynamic Studies of Particle Wake Potentials in Plasmas

    NASA Astrophysics Data System (ADS)

    Ellis, Ian; Graziani, Frank; Glosli, James; Strozzi, David; Surh, Michael; Richards, David; Decyk, Viktor; Mori, Warren

    2010-11-01

    Fast Ignition studies require a detailed understanding of electron scattering, stopping, and energy deposition in plasmas with variable numbers of particles within a Debye sphere. Presently there is disagreement in the literature concerning the proper description of these processes. Developing and validating proper descriptions requires studying the processes using first-principles electrostatic simulations, possibly including magnetic fields. We are using the particle-particle particle-mesh (P^3M) code ddcMD to perform these simulations. As a starting point in our study, we examined the wake of a particle passing through a plasma. In this poster, we compare the wake observed in 3D ddcMD simulations with that predicted by Vlasov theory and with those observed in the electrostatic PIC code BEPS with the cell size reduced to 0.03 λ_D.

  17. TWANG-PIC, a novel gyro-averaged one-dimensional particle-in-cell code for interpretation of gyrotron experiments

    NASA Astrophysics Data System (ADS)

    Braunmueller, F.; Tran, T. M.; Vuillemin, Q.; Alberti, S.; Genoud, J.; Hogge, J.-Ph.; Tran, M. Q.

    2015-06-01

    A new gyrotron simulation code for simulating the beam-wave interaction using a monomode, time-dependent, self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. Compared with common PIC codes, it is distinguished by its computational speed, which makes its use in parameter scans and experiment interpretation possible. A benchmark of the new code is presented, as well as a comparative study between the two codes. This study shows that the inclusion of time-dependence in the electron equations, as is the case in the PIC approach, is mandatory for simulating any kind of non-stationary oscillation in gyrotrons. Finally, the new code is compared with experimental results, and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high-power gyrotron development.

  18. TWANG-PIC, a novel gyro-averaged one-dimensional particle-in-cell code for interpretation of gyrotron experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braunmueller, F., E-mail: falk.braunmueller@epfl.ch; Tran, T. M.; Alberti, S.

    A new gyrotron simulation code for simulating the beam-wave interaction using a monomode, time-dependent, self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. Compared with common PIC codes, it is distinguished by its computational speed, which makes its use in parameter scans and experiment interpretation possible. A benchmark of the new code is presented, as well as a comparative study between the two codes. This study shows that the inclusion of time-dependence in the electron equations, as is the case in the PIC approach, is mandatory for simulating any kind of non-stationary oscillation in gyrotrons. Finally, the new code is compared with experimental results, and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high-power gyrotron development.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamek, Julian; Daverio, David; Durrer, Ruth

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos, where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models, and to go beyond the usually adopted quasi-static approximation. Our code is publicly available.

  20. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

    GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI, and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results, and discuss the planned future development. The code is freely available as an open-source project on GitHub at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.

  1. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

    Accurate predictions of the equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  2. Numerical and Experimental Investigations of the Flow in a Stationary Pelton Bucket

    NASA Astrophysics Data System (ADS)

    Nakanishi, Yuji; Fujii, Tsuneaki; Kawaguchi, Sho

    A numerical code based on one of the mesh-free particle methods, the Moving-Particle Semi-implicit (MPS) method, has so far been used for the simulation of free surface flows in the buckets of Pelton turbines. In this study, the flow in a stationary bucket is investigated by MPS simulation and by experiment to validate the numerical code. The free surface flow, which depends on the angular position of the bucket, and the corresponding pressure distribution on the bucket computed by the numerical code are compared with those obtained experimentally. The comparison shows that the numerical code based on the MPS method is a useful tool for gaining insight into the free surface flows in Pelton turbines.
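    MPS discretizes differential operators as kernel-weighted sums over neighbouring particles. The standard MPS weight function (as in the original Koshizuka-Oka formulation; the kernel actually used in this study is not specified) is:

```python
def mps_weight(r, r_e):
    """Standard MPS kernel weight: w(r) = r_e/r - 1 for 0 < r < r_e,
    and 0 outside the interaction radius r_e."""
    return r_e / r - 1.0 if 0.0 < r < r_e else 0.0

def number_density(distances, r_e):
    """Particle number density at particle i: the sum of kernel weights
    over the distances to its neighbours."""
    return sum(mps_weight(r, r_e) for r in distances)

print(mps_weight(1.0, 2.0))  # -> 1.0
```

    Free surfaces are then detected as particles whose number density falls below a fraction of the bulk value, which is what makes the method attractive for Pelton bucket flows.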

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the underlying theories of the physics models are included as well.

  4. Dust-Particle Transport in Tokamak Edge Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pigarov, A Y; Krasheninnikov, S I; Soboleva, T K

    2005-09-12

    Dust particulates in the size range of 10 nm-100 μm are found in all fusion devices. Such dust can be generated during tokamak operation by strong plasma/material-surface interactions. Some recent experiments and theoretical estimates indicate that dust particles can provide an important source of impurities in the tokamak plasma. Moreover, dust can be a serious threat to the safety of next-step fusion devices. In this paper, recent experimental observations on dust in fusion devices are reviewed. A physical model for dust transport simulation, and a newly developed code, DUSTT, are discussed. The DUSTT code incorporates dust dynamics due to comprehensive dust-plasma interactions, as well as the effects of dust heating, charging, and evaporation. The code tracks test dust particles in realistic plasma backgrounds as provided by edge-plasma transport codes. Results are presented for dust transport in current and next-step tokamaks. The effect of dust on divertor plasma profiles and core plasma contamination is examined.

  5. Implementation of an anomalous radial transport model for continuum kinetic edge codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2007-11-01

    Radial plasma transport in magnetic fusion devices is often dominated by plasma turbulence rather than by neoclassical collisional transport. Continuum kinetic edge codes [such as the (2d,2v) transport version of TEMPEST, and also EGK] compute the collisional transport directly, but there is a need to model the anomalous transport from turbulence for long-time transport simulations. Such a model is presented, and results are shown for its implementation in the TEMPEST gyrokinetic edge code. The model includes velocity-dependent convection and diffusion coefficients expressed as Hermite polynomials in velocity. The Hermite coefficients can be set, e.g., by specifying the ratio of particle to energy transport, as in fluid transport codes. The anomalous transport terms preserve the property of no particle flux into unphysical regions of velocity space. TEMPEST simulations are presented showing the separate control of anomalous particle and energy transport, and comparisons are made with neoclassical transport also included.
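    A velocity-dependent coefficient expressed as a Hermite series can be evaluated directly with NumPy's probabilists'-Hermite routines. The coefficient values and the normalization below are illustrative, not TEMPEST's:

```python
from numpy.polynomial.hermite_e import hermeval

def anomalous_D(v_norm, coeffs):
    """Evaluate D(v) = sum_k c_k He_k(v / v_th) as a probabilists'
    Hermite series in the normalized velocity v_norm = v / v_th."""
    return hermeval(v_norm, coeffs)

# He_0 = 1, He_1 = x, He_2 = x^2 - 1, so with c = [1, 0, 0.5]:
# D(2) = 1 + 0.5 * (4 - 1) = 2.5
print(anomalous_D(2.0, [1.0, 0.0, 0.5]))  # -> 2.5
```

    Truncating the series at low order while adjusting the leading coefficients is one way to realize the particle-to-energy transport ratio mentioned above.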

  6. Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST

    NASA Astrophysics Data System (ADS)

    Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2018-04-01

    We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in the molecular dynamics code GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version of the code is developed from our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which enlarges the maximum system size attainable on the same device. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs for the simulation of Lennard-Jones liquid, dissipative particle dynamics liquid, polymer and nanoparticle composites, and two-patch particles on a workstation. Good scaling across many cluster nodes is demonstrated for two-patch particles.
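    The domain-to-domain communication step reduces to shipping particles that crossed a boundary to the neighbouring rank. A serial 1D sketch of that bookkeeping, with lists standing in for the MPI send/receive buffers (illustrative, not GALAMOST's implementation):

```python
def exchange_particles(domains, bounds):
    """Move particles that left their subdomain into the neighbouring
    domain's list. domains[d] holds the particle positions owned by
    rank d, and bounds[d] = (lo, hi) is that rank's slab."""
    inbox = [[] for _ in domains]
    for d, parts in enumerate(domains):
        lo, hi = bounds[d]
        kept = []
        for x in parts:
            if x < lo and d > 0:
                inbox[d - 1].append(x)       # "send" to the left neighbour
            elif x >= hi and d + 1 < len(domains):
                inbox[d + 1].append(x)       # "send" to the right neighbour
            else:
                kept.append(x)
        domains[d] = kept
    for d, received in enumerate(inbox):     # "receive" phase
        domains[d].extend(received)
    return domains

doms = exchange_particles([[0.2, 1.1], [1.4, 0.8]], [(0.0, 1.0), (1.0, 2.0)])
print(doms)  # -> [[0.2, 0.8], [1.4, 1.1]]
```

    In the real code the same pattern also fills ghost layers, so each GPU sees the neighbour particles within the interaction cutoff.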

  7. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will enormously advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle.
2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time support and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes have had a transforming effect on space physics and astrophysics. We expect that our new-generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  8. Collision Models for Particle Orbit Code on SSX

    NASA Astrophysics Data System (ADS)

    Fisher, M. W.; Dandurand, D.; Gray, T.; Brown, M. R.; Lukin, V. S.

    2011-10-01

    Coulomb collision models are being developed and incorporated into the Hamiltonian particle pushing code (PPC) for applications to the Swarthmore Spheromak eXperiment (SSX). A Monte Carlo model based on that of Takizuka and Abe [JCP 25, 205 (1977)] performs binary collisions between test particles and thermal plasma field particles randomly drawn from a stationary Maxwellian distribution. A field-based electrostatic fluctuation model scatters particles from a spatially uniform random distribution of positive and negative spherical potentials generated throughout the plasma volume. The number, radii, and amplitude of these potentials are chosen to mimic the correct particle diffusion statistics without the use of random particle draws or collision frequencies. An electromagnetic fluctuating field model will be presented, if available. These numerical collision models will be benchmarked against known analytical solutions, including beam diffusion rates and Spitzer resistivity, as well as each other. The resulting collisional particle orbit models will be used to simulate particle collection with electrostatic probes in the SSX wind tunnel, as well as particle confinement in typical SSX fields. This work has been supported by US DOE, NSF and ONR.
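    The Takizuka-Abe game pairs particles each step and rotates their relative velocity by a random small angle, which conserves momentum and kinetic energy exactly. A simplified equal-mass sketch follows; the parameter nu_dt bundles the collision-frequency physics (Coulomb logarithm, densities, temperatures) into a single variance, so this is an illustration of the scheme rather than the SSX implementation:

```python
import math
import random

def binary_collision(v1, v2, nu_dt, rng=random):
    """Takizuka-Abe style binary Coulomb collision between two equal-mass
    particles: rotate the relative velocity u by a random small angle with
    variance ~ nu_dt, then rebuild v1, v2 about the unchanged center of
    mass. Momentum and kinetic energy are conserved exactly."""
    u = [a - b for a, b in zip(v1, v2)]
    umag = math.sqrt(sum(c * c for c in u))
    if umag == 0.0:
        return list(v1), list(v2)
    # tangent of half the scattering angle, Gaussian with <delta^2> = nu_dt
    delta = rng.gauss(0.0, math.sqrt(nu_dt))
    sin_t = 2.0 * delta / (1.0 + delta * delta)
    cos_t = 1.0 - 2.0 * delta * delta / (1.0 + delta * delta)
    phi = 2.0 * math.pi * rng.random()
    uperp = math.hypot(u[0], u[1])
    if uperp > 0.0:   # standard Takizuka-Abe rotation of u
        du = [u[0] / uperp * u[2] * sin_t * math.cos(phi)
              - u[1] / uperp * umag * sin_t * math.sin(phi)
              - u[0] * (1.0 - cos_t),
              u[1] / uperp * u[2] * sin_t * math.cos(phi)
              + u[0] / uperp * umag * sin_t * math.sin(phi)
              - u[1] * (1.0 - cos_t),
              -uperp * sin_t * math.cos(phi) - u[2] * (1.0 - cos_t)]
    else:             # u along z: rotate about an arbitrary azimuth
        du = [umag * sin_t * math.cos(phi),
              umag * sin_t * math.sin(phi),
              -u[2] * (1.0 - cos_t)]
    v1n = [a + 0.5 * d for a, d in zip(v1, du)]
    v2n = [b - 0.5 * d for b, d in zip(v2, du)]
    return v1n, v2n
```

    Because the update is a pure rotation of u about a fixed center of mass, repeated application reproduces the correct velocity-space diffusion without any net drift in momentum or energy.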

  9. Ignition and combustion characteristics of metallized propellants, phase 2

    NASA Technical Reports Server (NTRS)

    Mueller, D. C.; Turns, S. R.

    1994-01-01

    Experimental and analytical investigations focusing on aluminum/hydrocarbon gel droplet secondary atomization and its effects on gel-fueled rocket engine performance are being conducted. A single-laser-sheet sizing/velocimetry diagnostic, which should eliminate sizing bias in the data collection process, has been designed and constructed to overcome limitations of the two-color forward-scatter technique used in previous work. Calibration of this system is in progress and the data acquisition/validation code is being written. Narrow-band measurements of radiant emission, discussed in previous reports, will be used to determine whether aluminum ignition has occurred in a gel droplet. A one-dimensional model of a gel-fueled rocket combustion chamber, described in earlier reports, has been exercised in conjunction with a two-dimensional, two-phase nozzle code to predict the performance of an aluminum/hydrocarbon-fueled engine. Estimated secondary atomization effects on propellant burnout distance, condensed-particle radiation losses to the chamber walls, and nozzle two-phase flow losses are also investigated. Calculations indicate that only modest secondary atomization is required to significantly reduce propellant burnout distances, aluminum oxide residual size, and radiation heat losses. Radiation losses equal to approximately 2-13 percent of the energy released during combustion were estimated, depending on secondary atomization intensity. The two-phase nozzle code was employed to estimate radiation and two-phase flow effects on overall engine performance. Radiation losses yielded a one percent decrease in engine Isp. Results also indicate that secondary atomization may have less effect on two-phase losses than it does on propellant burnout distance, and no effect if oxide particle coagulation and shear-induced droplet breakup govern oxide particle size. Engine Isp was found to decrease from 337.4 to 293.7 seconds as gel aluminum mass loading was varied from 0 to 70 wt percent. Engine Isp efficiencies, accounting for radiation and two-phase flow effects, on the order of 0.946 were calculated for a 60 wt percent gel, assuming a fragmentation ratio of five.

  10. Particle-in-cell simulations with charge-conserving current deposition on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren

    2011-10-01

    Recently, using CUDA, we have developed an electromagnetic Particle-in-Cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU PIC, including a charge-conserving current deposition scheme with minimal branching and parallel particle sorting. These algorithms make efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.
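Parallel particle sorting, mentioned above as one of the key GPU kernels, groups particles by cell so that deposition and gather operations touch contiguous memory. A serial counting-sort analogue in NumPy (names and layout are hypothetical; the actual CUDA kernel is organized very differently):

```python
import numpy as np

def sort_particles_by_cell(pos, ncells, dx):
    """Reorder a 1-D particle array so particles sharing a cell are contiguous.

    Serial analogue of a parallel counting sort: compute cell indices, count
    occupancy, build per-cell start offsets, then apply a stable permutation."""
    cell = np.minimum((pos / dx).astype(int), ncells - 1)
    counts = np.bincount(cell, minlength=ncells)             # particles per cell
    offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))  # start of each cell
    order = np.argsort(cell, kind="stable")                  # stable permutation
    return pos[order], cell[order], offsets
```

On a GPU the counts and prefix sum are computed cooperatively, but the output invariant is the same: `offsets[c]` is where cell `c`'s particles begin.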

  11. Computation of Cosmic Ray Ionization and Dose at Mars: a Comparison of HZETRN and Planetocosmics for Proton and Alpha Particles

    NASA Technical Reports Server (NTRS)

    Gronoff, Guillaume; Norman, Ryan B.; Mertens, Christopher J.

    2014-01-01

    The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment both in free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares dose calculations at Mars by HZETRN and the Geant4 application Planetocosmics. The dose at ground level and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work considers Solar Energetic Particle events, allowing for the comparison of varying input radiation environments. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.

  12. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high-performance computing (HPC) systems with thread-oriented programming. PHoToNs adopts a hybrid scheme to compute the gravitational force: the conventional Particle-Mesh (PM) algorithm for the long-range force, the Tree algorithm for the short-range force, and the direct-summation Particle-Particle (PP) algorithm for gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads are used to flexibly manage the domain communication, PM calculation and synchronization, as well as the Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
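The Peano-Hilbert decomposition works because consecutive positions along the curve are spatially adjacent, so contiguous index ranges form compact domains that can be handed to different ranks. A 2-D sketch of the standard Hilbert index computation (PHoToNs itself decomposes a 3-D domain; this 2-D version only illustrates the idea):

```python
def hilbert_index(n, x, y):
    """Distance of grid point (x, y) along the Hilbert curve filling an n-by-n
    grid (n a power of two). Nearby indices map to nearby points, which is
    what makes the curve useful for domain decomposition."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate the quadrant so the sub-curve has the canonical orientation
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Splitting the sorted index range `0..n*n-1` into equal chunks then yields one compact, connected patch per rank.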

  13. 2D Implosion Simulations with a Kinetic Particle Code

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Even, Wesley; Strother, Terrance

    2017-10-01

    Many problems in laboratory and plasma physics are subject to flows that move between the continuum and the kinetic regime. We discuss two-dimensional (2D) implosion simulations that were performed using a Monte Carlo kinetic particle code. The application of kinetic transport theory is motivated, in part, by the occurrence of non-equilibrium effects in inertial confinement fusion (ICF) capsule implosions, which cannot be fully captured by hydrodynamics simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple 2D disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. I.S. acknowledges support through the Director's fellowship from Los Alamos National Laboratory. This research used resources provided by the LANL Institutional Computing Program.
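In a Monte Carlo kinetic particle code of this kind, the mean free path enters through the sampling of free-flight distances between collisions, which are exponentially distributed. A minimal sketch (the function name and interface are illustrative, not taken from the code described above):

```python
import numpy as np

def free_flight_distances(mean_free_path, n, rng):
    """Sample distances to the next collision for n particles; free paths are
    exponentially distributed about the mean free path."""
    # 1 - U lies in (0, 1], so the logarithm is always finite
    return -mean_free_path * np.log(1.0 - rng.random(n))
```

Varying the mean free path then sweeps the simulation between the near-continuum limit (short paths, many collisions) and the rarefied limit (long paths, nearly free streaming).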

  14. LANL LDRD-funded project: Test particle simulations of energetic ions in natural and artificial radiation belts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowee, Misa; Liu, Kaijun; Friedel, Reinhard H.

    2012-07-17

    We summarize the scientific problem and work plan for the LANL LDRD-funded project to use a test particle code to study the sudden de-trapping of inner belt protons and possible cross-L transport of debris ions after a high altitude nuclear explosion (HANE). We also discuss future application of the code for other HANE-related problems.

  15. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  16. The UPSF code: a metaprogramming-based high-performance automatically parallelized plasma simulation framework

    NASA Astrophysics Data System (ADS)

    Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao

    2017-10-01

    UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility using cutting-edge techniques supported by the C++17 standard. Through metaprogramming, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck models, along with their variants and hybrid methods. With C++ metaprogramming, a single code base can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structures and accelerate matrix and tensor operations through BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic cases respectively, are presented to demonstrate the validity and performance of the UPSF code.

  17. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  18. Measurements of confined alphas and tritons in the MHD quiescent core of TFTR plasmas using the pellet charge exchange diagnostic

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Budny, R. V.; Mansfield, D. K.; Redi, M. H.; Roquemore, A. L.; Fisher, R. K.; Duong, H. H.; McChesney, J. M.; Parks, P. B.; Petrov, M. P.; Gorelenkov, N. N.

    1996-10-01

    The energy distributions and radial density profiles of the fast confined trapped alpha particles in DT experiments on TFTR are being measured in the energy range 0.5 - 3.5 MeV using the pellet charge exchange (PCX) diagnostic. A brief description of the measurement technique, which involves active neutral particle analysis using the ablation cloud surrounding an injected impurity pellet as the neutralizer, is presented. This paper focuses on alpha and triton measurements in the core of MHD-quiescent TFTR discharges, where the expected classical slowing-down and pitch-angle scattering effects are not complicated by stochastic ripple diffusion and sawtooth activity. In particular, the first measurement of the alpha slowing-down distribution up to the birth energy, obtained using boron pellet injection, is presented. The measurements are compared with predictions using the TRANSP Monte Carlo code and/or a Fokker-Planck post-TRANSP processor code, which assume that the alphas and tritons are well confined and slow down classically. Both the shape of the measured alpha and triton energy distributions and their density ratios are in good agreement with the code calculations. We conclude that the PCX measurements are consistent with classical thermalization of the fusion-generated alphas and tritons.
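The classical slowing-down distribution referred to above has, in its simplest steady-state form, f(v) ∝ 1/(v³ + v_c³) below the birth speed and zero above it, where v_c is the critical speed set by electron drag. A hedged numerical sketch (the grid, normalization, and names are illustrative; the TRANSP treatment is far more complete):

```python
import numpy as np

def slowing_down_f(v, v_birth, v_crit):
    """Steady-state classical slowing-down distribution: f(v) ~ 1/(v^3 + v_c^3)
    below the birth speed, zero above it (no transport losses assumed)."""
    f = np.where(v <= v_birth, 1.0 / (v**3 + v_crit**3), 0.0)
    w = 4.0 * np.pi * v**2 * f                             # isotropic speed density
    norm = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(v))     # trapezoid rule
    return f / norm
```

The 4πv² weight is the isotropic shell factor; normalizing with the same quadrature used for any later moments keeps integrals of the returned distribution consistent.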

  19. CNEA/ANL collaboration program to develop an optimized version of DART validation and assessment by means of U{sub 3}Si{sub x} and U{sub 3}O{sub 8}-Al dispersed CNEA miniplate irradiation behavior.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solis, D.

    1998-10-16

    The DART code is based upon a thermomechanical model that can predict swelling, recrystallization, fuel-meat interdiffusion and other issues related to the behavior of dispersed MTR fuel elements under irradiation. As part of a common effort to develop an optimized version of DART, a comparison between DART predictions and CNEA miniplate irradiation data was made. The irradiation took place during 1981-82 for the U{sub 3}O{sub 8} miniplates and 1985-86 for U{sub 3}Si{sub x} at the Oak Ridge Research Reactor (ORR). The microphotographs were studied by means of the IMAWIN 3.0 image analysis code and different fission-gas bubble distributions were obtained. It was also possible to find and identify different morphologic zones. In both kinds of fuel, different phases were recognized, such as particle peripheral zones with evidence of Al-U reaction, internal recrystallized zones and bubbles. Very good agreement between code predictions and irradiation results was found. The few discrepancies are due to local, fabrication and irradiation uncertainties, such as the presence of the U{sub 3}Si phase in U{sub 3}Si{sub 2} particles and the effective burnup.

  20. The Splashback Radius of Halos from Particle Dynamics. I. The SPARTA Algorithm

    NASA Astrophysics Data System (ADS)

    Diemer, Benedikt

    2017-07-01

    Motivated by the recent proposal of the splashback radius as a physical boundary of dark-matter halos, we present a parallel computer code for Subhalo and PARticle Trajectory Analysis (SPARTA). The code analyzes the orbits of all simulation particles in all host halos, billions of orbits in the case of typical cosmological N-body simulations. Within this general framework, we develop an algorithm that accurately extracts the location of the first apocenter of particles after infall into a halo, or splashback. We define the splashback radius of a halo as the smoothed average of the apocenter radii of individual particles. This definition allows us to reliably measure the splashback radii of 95% of host halos above a resolution limit of 1000 particles. We show that, on average, the splashback radius and mass are converged to better than 5% accuracy with respect to mass resolution, snapshot spacing, and all free parameters of the method.
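The core of the SPARTA algorithm described above is locating the first apocenter of each particle's orbit after infall. A toy version of that detection on a sampled radial trajectory (the scan logic and names are assumptions for illustration; the actual code must additionally handle noise, finite snapshot spacing, and smoothing):

```python
def first_apocenter(t, r):
    """Scan a radial trajectory r(t) for the first local maximum after infall
    has begun; its radius is the particle's splashback (first-apocenter) radius.
    Returns (time, radius), or None if no apocenter is found."""
    falling = False
    for i in range(1, len(r) - 1):
        if r[i] < r[i - 1]:
            falling = True            # the particle has started falling in
        if falling and r[i] >= r[i - 1] and r[i] >= r[i + 1]:
            return t[i], r[i]         # first turnaround after infall
    return None
```

Averaging such per-particle apocenter radii, as the paper does, then smooths the particle-to-particle scatter into a halo-level splashback radius.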

  1. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.
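The time-step constraint discussed above, where the largest speed of sound among the phases determines the step, is the acoustic CFL condition applied globally. A minimal sketch (the CFL coefficient and interface are illustrative assumptions, not the DualSPHysics integrator):

```python
def multiphase_dt(h, phases, cfl=0.3):
    """Acoustic CFL time step for a multi-phase SPH step with one global dt:
    the stiffest phase (largest sound speed + flow speed) limits all phases.

    h      -- smoothing length
    phases -- iterable of (sound_speed, max_flow_speed), one entry per phase
    """
    return cfl * h / max(c + u for c, u in phases)
```

With water-like and air-like phases, e.g. `multiphase_dt(0.01, [(1480.0, 2.0), (340.0, 5.0)])`, the water phase sets the step even though the air moves faster, which is why the multi-phase scheme needs efficient per-phase computation.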

  2. Computer program for prediction of the deposition of material released from fixed and rotary wing aircraft

    NASA Technical Reports Server (NTRS)

    Teske, M. E.

    1984-01-01

    This is a user manual for the computer code ""AGDISP'' (AGricultural DISPersal) which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchill, R. M.; Chang, C. S.; Ku, S.

    Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer (SOL)) is required in order to reliably predict performance in future fusion devices. We explore turbulent characteristics in the edge region from a multi-scale neoclassical and turbulent XGC1 gyrokinetic simulation in a DIII-D-like tokamak geometry, here excluding neutrals and collisions. For an H-mode type plasma with steep pedestal, it is found that the electron density fluctuations increase towards the separatrix, and stay high well into the SOL, reaching a maximum value of δn_e/n̄_e ~ 0.18. Blobs are observed, born around the magnetic separatrix surface, and propagate radially outward with velocities generally less than 1 km s^-1. Strong poloidal motion of the blobs is also present, near 20 km s^-1, consistent with E × B rotation. The electron density fluctuations show a negative skewness in the closed field-line pedestal region, consistent with the presence of 'holes', followed by a transition to strong positive skewness across the separatrix and into the SOL. These simulations indicate that not only neoclassical phenomena, but also turbulence, including the blob-generation mechanism, can remain important in the steep H-mode pedestal and SOL. Lastly, qualitative comparisons will be made to experimental observations.
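The skewness statistic used above to distinguish 'holes' (negative skewness inside the separatrix) from blobs (positive skewness in the SOL) is the third standardized moment of the fluctuation signal. A minimal implementation:

```python
import numpy as np

def skewness(x):
    """Third standardized moment of a signal: negative when depletions
    ('holes') dominate, positive when enhancements ('blobs') dominate."""
    d = np.asarray(x, dtype=float)
    d = d - d.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5
```

Applied to a density time series at a fixed radial location, its sign flip across the separatrix is the diagnostic signature reported in the abstract.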

  4. Parallelization Issues and Particle-in-Cell Codes

    NASA Astrophysics Data System (ADS)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. 
A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
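The cache-friendly grid reindexing described above can be illustrated with a blocked (tiled) layout, in which a small neighbourhood of grid points maps to contiguous memory. A sketch (the block size and names are illustrative, not the thesis's exact hierarchical structure):

```python
def blocked_index(ix, iy, nx, b):
    """Map grid point (ix, iy) on an nx-by-nx grid to a blocked (tiled) linear
    index, so each b-by-b tile is contiguous in memory; neighbouring points
    then tend to share cache lines. Assumes b divides nx."""
    bx, lx = divmod(ix, b)   # which tile / offset within tile, x direction
    by, ly = divmod(iy, b)   # same for y
    tiles_per_row = nx // b
    return ((by * tiles_per_row + bx) * b + ly) * b + lx
```

In row-major order, the point above (ix, iy) is nx entries away; in the blocked layout it is usually only b entries away, which is what improves cache behaviour for stencil and gather/scatter operations.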

  5. Improvements of the particle-in-cell code EUTERPE for petascaling machines

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.

    2011-09-01

    In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.

  6. Studying Spacecraft Charging via Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Delzanno, G. L.; Moulton, D.; Meierbachtol, C.; Svyatskiy, D.; Vernon, L.

    2015-12-01

    The electrical charging of spacecraft due to bombarding charged particles can affect their performance and operation. We study this charging using CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions [1]. CPIC is based on multi-block curvilinear meshes, resulting in near-optimal computational performance while maintaining geometric accuracy. Relevant plasma parameters are imported from the SHIELDS framework (currently under development at LANL), which simulates geomagnetic storms and substorms in the Earth's magnetosphere. Simulated spacecraft charging results of representative Van Allen Probe geometries using these plasma parameters will be presented, along with an overview of the code. [1] G.L. Delzanno, E. Camporeale, J.D. Moulton, J.E. Borovsky, E.A. MacDonald, and M.F. Thomsen, "CPIC: A Curvilinear Particle-In-Cell Code for Plasma-Material Interaction Studies," IEEE Trans. Plas. Sci., 41 (12), 3577 (2013).

  7. Improving the efficiency of quantum hash function by dense coding of coin operators in discrete-time quantum walk

    NASA Astrophysics Data System (ADS)

    Yang, YuGuang; Zhang, YuChen; Xu, Gang; Chen, XiuBo; Zhou, Yi-Hua; Shi, WeiMin

    2018-03-01

    Li et al. first proposed a quantum hash function (QHF) in a quantum-walk architecture. In their scheme, two two-particle interactions, i.e., the I interaction and the π-phase interaction, are introduced, and the choice of the I or π-phase interaction at each iteration depends on a message bit. In this paper, we propose an efficient QHF based on dense coding of coin operators in a discrete-time quantum walk. Compared with existing QHFs, our protocol has the following advantages: the efficiency of the QHF can be doubled or more, and only one particle is needed, with no two-particle interactions, so that quantum resources are saved. This points the way to applying the dense coding technique to quantum cryptographic protocols, especially to applications with restricted quantum resources.
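The dense-coding idea, two message bits selecting one of four coin operators per walk step, can be sketched with a toy one-particle discrete-time quantum walk on a cycle. The coin angles, lattice size, and use of the final position distribution as the 'hash' are illustrative assumptions, not the authors' construction:

```python
import numpy as np

def coin(theta):
    """A 2x2 real unitary (reflection) coin operator."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# four coins indexed by two bits: the dense-coding step
COINS = [coin(t) for t in (np.pi / 8, 3 * np.pi / 8, 5 * np.pi / 8, 7 * np.pi / 8)]

def qhf_sketch(message_bits, n_pos=32):
    """Toy quantum-walk 'hash': each step consumes TWO message bits, which
    select one of four coin operators; the hash is the final position
    distribution of a single walker on a cycle of n_pos sites."""
    assert len(message_bits) % 2 == 0
    psi = np.zeros((n_pos, 2))
    psi[0, 0] = 1.0                              # walker at site 0, coin |0>
    for i in range(0, len(message_bits), 2):
        c = COINS[2 * message_bits[i] + message_bits[i + 1]]
        psi = psi @ c.T                          # toss the message-dependent coin
        shifted = np.empty_like(psi)
        shifted[:, 0] = np.roll(psi[:, 0], -1)   # coin |0> component steps left
        shifted[:, 1] = np.roll(psi[:, 1], +1)   # coin |1> component steps right
        psi = shifted
    return (psi**2).sum(axis=1)                  # real amplitudes -> probabilities
```

Because two bits are absorbed per step instead of one, a message of a given length needs half as many walk steps, which is the efficiency doubling claimed above.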

  8. Optical, microphysical, mass and geometrical properties of aged volcanic particles observed over Athens, Greece, during the Eyjafjallajökull eruption in April 2010 through synergy of Raman lidar and sunphotometer measurements

    NASA Astrophysics Data System (ADS)

    Kokkalis, P.; Papayannis, A.; Amiridis, V.; Mamouri, R. E.; Veselovskii, I.; Kolgotin, A.; Tsaknakis, G.; Kristiansen, N. I.; Stohl, A.; Mona, L.

    2013-09-01

    Vertical profiles of the optical (extinction and backscatter coefficients, lidar ratio and Ångström exponent), microphysical (mean effective radius, mean refractive index, mean number concentration) and geometrical properties, as well as the mass concentration, of volcanic particles from the Eyjafjallajökull eruption were retrieved at selected heights over Athens, Greece, using multi-wavelength Raman lidar measurements performed during the period 21-24 April 2010. Aerosol Robotic Network (AERONET) columnar particulate measurements, along with inversion schemes, were combined with the lidar observations to deliver the aforementioned products. The well-known FLEXPART (FLEXible PARTicle dispersion) model is also employed to simulate the volcanic dispersion and estimate the horizontal and vertical distribution of volcanic particles. Compared with the lidar measurements within the planetary boundary layer over Athens, FLEXPART proved to be a useful tool for determining the state of mixing of ash with other, locally emitted aerosol types. The major findings presented in our work concern the identification of volcanic particle layers in the form of filaments after 7-day transport from the volcanic source (approximately 4000 km away from our site), from the surface up to 10 km according to the lidar measurements. Mean hourly averaged lidar signals indicated that the layer thickness of the volcanic particles ranged between 1.5 and 2.2 km. The corresponding aerosol optical depth was found to vary from 0.01 to 0.18 at 355 nm and from 0.02 to 0.17 at 532 nm. Furthermore, the corresponding lidar ratios (S) ranged between 60 and 80 sr at 355 nm and between 44 and 88 sr at 532 nm. The mean effective radius of the volcanic particles, estimated by applying an inversion scheme to the lidar data, was found to vary within the range 0.13-0.38 μm, and the refractive index ranged from 1.39+0.009i to 1.48+0.006i. This high variability is most probably attributed to the mixing of aged volcanic particles with other aerosol types of local origin. Finally, the LIRIC (LIdar/Radiometer Inversion Code) combined lidar/sun-photometric inversion algorithm has been applied in order to retrieve particle concentrations. These have been compared with FLEXPART simulations of the vertical distribution of ash, showing good agreement concerning not only the geometrical properties of the volcanic particle layers but also the particle mass concentration.

  9. Calculation of four-particle harmonic-oscillator transformation brackets

    NASA Astrophysics Data System (ADS)

    Germanas, D.; Kalinauskas, R. K.; Mickevičius, S.

    2010-02-01

    A procedure for the precise calculation of the three- and four-particle harmonic-oscillator (HO) transformation brackets is presented. Analytical expressions for the four-particle HO transformation brackets are given. The computer code for the calculation of the HO transformation brackets proves to be quick and efficient, and produces results with small numerical uncertainties.
    Program summary:
    Program title: HOTB
    Catalogue identifier: AEFQ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFQ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1247
    No. of bytes in distributed program, including test data, etc.: 6659
    Distribution format: tar.gz
    Programming language: FORTRAN 90
    Computer: Any computer with a FORTRAN 90 compiler
    Operating system: Windows, Linux, FreeBSD, Tru64 Unix
    RAM: 8 MB
    Classification: 17.17
    Nature of problem: Calculation of the three- and four-particle harmonic-oscillator transformation brackets.
    Solution method: The method is based on compact expressions for the three-particle harmonic-oscillator brackets presented in [1] and the expressions for the four-particle harmonic-oscillator brackets presented in this paper.
    Restrictions: The three- and four-particle harmonic-oscillator transformation brackets up to e = 28.
    Unusual features: Possibility of calculating the four-particle harmonic-oscillator transformation brackets.
    Running time: Less than one second for a single harmonic-oscillator transformation bracket.
    References: [1] G.P. Kamuntavičius, R.K. Kalinauskas, B.R. Barrett, S. Mickevičius, D. Germanas, Nuclear Physics A 695 (2001) 191.

  10. Design studies of the Ku-band, wide-band Gyro-TWT amplifier

    NASA Astrophysics Data System (ADS)

    Jung, Sang Wook; Lee, Han Seul; Jang, Kwong Ho; Choi, Jin Joo; Hong, Yong Jun; Shin, Jin Woo; So, Jun Ho; Won, Jong Hyo

    2014-02-01

    This paper reports a Ku-band, wide-band gyrotron traveling-wave tube (Gyro-TWT) that is currently being developed at Kwangwoon University. The Gyro-TWT has a two-stage, linearly tapered interaction circuit; together with a nonlinearly tapered magnetic field, this gives the device a wide operating bandwidth of 23%. Simulations with the 2D particle-in-cell (PIC) code MAGIC2D give a saturated gain of 17.3 dB and a maximum saturated output power of 24.34 kW. A double-anode MIG was simulated with the E-Gun code; the results were a transverse-to-axial beam velocity ratio (alpha) of 0.7 and a 2.3% axial velocity spread at 50 kV and 4 A. A magnetic field profile simulation was performed using the Poisson code to obtain the grazing magnetic field over the entire interaction circuit.

  11. A computational and theoretical analysis of falling frequency VLF emissions

    NASA Astrophysics Data System (ADS)

    Nunn, David; Omura, Yoshiharu

    2012-08-01

    Recently much progress has been made in the simulation and theoretical understanding of rising-frequency triggered emissions and rising chorus. Both PIC and Vlasov VHS codes produce risers in the region downstream from the equator toward which the VLF waves are traveling. The VHS code only produces fallers or downward hooks with difficulty, due to the coherent nature of the wave-particle interaction across the equator. With the VHS code we now confine the interaction region to the region upstream from the equator, where the inhomogeneity factor S is positive. This suppresses correlated wave-particle interaction effects across the equator and the tendency of the code to trigger risers, and permits the formation of a proper falling-tone generation region. The VHS code now easily and reproducibly triggers falling tones. The evolution of the resonant particle current JE in space and time shows a generation point at -5224 km, and the wavefield undergoes amplification of some 25 dB in traversing the nonlinear generation region. The current component parallel to the wave magnetic field (JB) is positive, whereas it is negative for risers. The resonant particle trap shows an enhanced distribution function or `hill', whereas risers have a `hole'. According to recent theory (Omura et al., 2008, 2009), the sweeping frequency is due primarily to the advective term. The nonlinear frequency shift term is now negative (~ -12 Hz) and the sweep rate of -800 Hz/s is approximately the nonlinear frequency shift divided by TN, the transition time, which is of the order of a trapping time.
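The closing estimate can be checked with simple arithmetic: if the sweep rate is approximately the nonlinear frequency shift divided by the transition time T_N, then the quoted -12 Hz shift and -800 Hz/s sweep rate imply T_N ≈ 15 ms (a back-of-envelope check with hypothetical function names):

```python
def transition_time(df_nl_hz, sweep_rate_hz_per_s):
    """Invert sweep_rate ~= df_nl / T_N for the transition time T_N (seconds)."""
    return df_nl_hz / sweep_rate_hz_per_s

# -12 Hz / (-800 Hz/s) = 0.015 s, i.e. a transition time of about 15 ms
t_n = transition_time(-12.0, -800.0)
```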

  12. Modeling Spectra of Icy Satellites and Cometary Icy Particles Using Multi-Sphere T-Matrix Code

    NASA Astrophysics Data System (ADS)

    Kolokolova, Ludmilla; Mackowski, Daniel; Pitman, Karly M.; Joseph, Emily C. S.; Buratti, Bonnie J.; Protopapa, Silvia; Kelley, Michael S.

    2016-10-01

    The Multi-Sphere T-matrix code (MSTM) allows rigorous computation of the characteristics of light scattered by a cluster of spherical particles. It was introduced to the scientific community in 1996 (Mackowski & Mishchenko, 1996, JOSA A, 13, 2266). Later it was put online and became one of the most popular codes for studying the photopolarimetric properties of aggregated particles. Later versions of this code, especially its parallelized version MSTM3 (Mackowski & Mishchenko, 2011, JQSRT, 112, 2182), were used to compute the angular and wavelength dependence of the intensity and polarization of light scattered by aggregates of up to 4000 constituent particles (Kolokolova & Mackowski, 2012, JQSRT, 113, 2567). The version MSTM4 considers large thick slabs of spheres (Mackowski, 2014, Proc. of the Workshop "Scattering by aggregates", Bremen, Germany, March 2014, Th. Wriedt & Yu. Eremin, Eds., 6) and is significantly different from the earlier versions. It adopts a Discrete Fourier Convolution, implemented using a Fast Fourier Transform, for evaluation of the exciting field. MSTM4 is able to treat tens of thousands of spheres and is about 100 times faster than the MSTM3 code. This allows us not only to compute the light-scattering properties of a large number of electromagnetically interacting constituent particles, but also to perform multi-wavelength and multi-angle computations with quite reasonable CPU time and memory. We used MSTM4 to model near-infrared spectra of icy satellites of Saturn (Rhea, Dione, and Tethys data from Cassini VIMS) and of icy particles observed in the coma of comet 103P/Hartley 2 (data from EPOXI/DI HRII). Our modeling shows that in the case of the icy satellites the best fit to the observed spectra is provided by regolith made of spheres of radius ~1 micron with a porosity in the range 85-95%, which varies slightly among the satellites. Fitting the spectra of the cometary icy particles requires icy aggregates larger than 40 microns with constituent spheres in the micron size range.
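    The convolution-theorem evaluation named above can be sketched generically: a circular convolution computed as a pointwise product in the Fourier domain. A naive O(N²) DFT stands in here for the FFT that MSTM4 actually uses, and the input sequences are arbitrary illustrative data, not scattering coefficients.

```python
import cmath

# Discrete Fourier transform, O(N^2); sign=-1 is the forward transform.
def dft(x, sign=-1):
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

# Circular convolution via the convolution theorem:
# DFT(a (*) b) = DFT(a) * DFT(b), elementwise.
def circular_convolve(a, b):
    n = len(a)
    prod = [u * v for u, v in zip(dft(a), dft(b))]
    return [v / n for v in dft(prod, sign=+1)]   # inverse transform has a 1/n
```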

  13. Reduced discretization error in HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low-energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low-energy light particles is important in assessing the risk associated with astronaut radiation exposure. In this work, modifications to the light-particle transport formalism are presented that accurately resolve the spectrum of low-energy light-ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light-particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.
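    The step-size issue can be illustrated with a toy slowing-down model (the stopping power S(E) = k√E below is invented for illustration and is not HZETRN's physics): a particle whose residual range is comparable to the transport step is artificially stopped within a single step, while sub-stepping recovers the analytic result.

```python
# Toy continuous-slowing-down model: dE/dx = -k*sqrt(E), so the residual
# range is R = 2*sqrt(E0)/k and the exact solution is
# E(x) = (sqrt(E0) - k*x/2)**2 for x <= R.
def take_steps(e0, k, step, nsub):
    e, dx = e0, step / nsub
    for _ in range(nsub):
        e = max(0.0, e - k * (e ** 0.5) * dx)   # explicit Euler sub-step
    return e

e0, k, step = 1.0, 1.0, 1.5            # residual range R = 2.0 > step
coarse = take_steps(e0, k, step, 1)    # one big step: energy driven to zero
fine = take_steps(e0, k, step, 1000)   # sub-stepped: approaches exact 0.0625
```

Even though the step (1.5) is shorter than the range (2.0), the single coarse step stops the particle entirely, which is the kind of discretization error the revised formalism removes.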

  14. Studies of Particle Wake Potentials in Plasmas

    NASA Astrophysics Data System (ADS)

    Ellis, Ian; Graziani, Frank; Glosli, James; Strozzi, David; Surh, Michael; Richards, David; Decyk, Viktor; Mori, Warren

    2011-10-01

    Fast Ignition studies require a detailed understanding of electron scattering, stopping, and energy deposition in plasmas with variable values for the number of particles within a Debye sphere. Presently there is disagreement in the literature concerning the proper description of these processes. Developing and validating proper descriptions requires studying the processes using first-principles electrostatic simulations, possibly including magnetic fields. We are using the particle-particle particle-mesh (PPPM) code ddcMD and the particle-in-cell (PIC) code BEPS to perform these simulations. As a starting point in our study, we examine the wake of a particle passing through a plasma in 3D electrostatic simulations performed with ddcMD and with BEPS using various cell sizes. In this poster, we compare the wakes we observe in these simulations with each other and with predictions from Vlasov theory. Prepared by LLNL under Contract DE-AC52-07NA27344 and by UCLA under Grant DE-FG52-09NA29552.

  15. Prompt Radiation Protection Factors

    DTIC Science & Technology

    2018-02-01

    Evaluation of the prompt radiation was performed using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle), together with evaluation of the protection factors (the ratio of dose in the open to dose at the protected location). Concerns raised by the possible detonation of a nuclear device have placed renewed emphasis on evaluation of the consequences of such an event.

  16. Use of Existing CAD Models for Radiation Shielding Analysis

    NASA Technical Reports Server (NTRS)

    Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.

    2015-01-01

    The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from the analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time-consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time, and analysis error.

  17. Design and development of a chopping and deflecting system for the high current injector at IUAC

    NASA Astrophysics Data System (ADS)

    Kedia, Sanjay Kumar; Mehta, R.

    2018-05-01

    The Low Energy Beam Transport (LEBT) section of the High Current Injector (HCI) incorporates a Chopping cum Deflecting System (CDS). The CDS comprises a deflecting system and a pair of slits that remove dark current and produce a time-bunched beam of 60 ns at repetition rates of 4, 2, 1, 0.5, 0.25, and 0.125 MHz. The distinguishing feature of the design is the use of a multi-plate deflecting structure with low capacitance to optimize the electric field, which in turn results in higher efficiency in terms of achievable ion current. To maximize the effective electric field and its uniformity, the gap between the deflecting plates has been varied and a semi-circular contour has been incorporated on the deflecting plates. As a result, the electric field variation is less than ±0.5% within the plate length. The length of the deflecting plates was chosen to maximize the transmission efficiency. Since the velocity of the charged particles in the LEBT section is constant, the separation between two successive sets of deflecting plates has been kept constant to match the ions' transit time within the gap, which is nearly 32 ns. A square pulse has been chosen, instead of a sinusoidal one, to increase the transmission efficiency and to decrease the tailing effect. The loaded capacitance of the structure was kept below 10 pF to achieve fast rise/fall times of the applied voltage signal. A Python code has been developed to verify the various design parameters. The simulations also show that efficient deflection of undesired particles can be achieved, resulting in >90% transmission efficiency within the bunch length. Various simulation tools, including SolidWorks, TRACE 3D, CST MWS, and home-built Python codes, were used to validate the design.
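    The kind of design check the abstract attributes to its Python code can be sketched as follows; the ion species, charge state, and accelerating voltage below are assumptions for illustration, not values from the paper.

```python
import math

# Matching the deflecting-plate separation to the ion transit time: a
# non-relativistic ion accelerated through an assumed potential covers a
# fixed distance in the ~32 ns transit time quoted above.
AMU = 1.66053906660e-27   # atomic mass unit [kg]
QE = 1.602176634e-19      # elementary charge [C]

def ion_speed(mass_amu, charge_state, platform_kv):
    """Non-relativistic speed after electrostatic acceleration."""
    e_kin = charge_state * QE * platform_kv * 1e3   # kinetic energy [J]
    return math.sqrt(2.0 * e_kin / (mass_amu * AMU))

v = ion_speed(16.0, 1, 30.0)   # assumed: singly charged oxygen at 30 kV
gap = v * 32e-9                # distance covered in the 32 ns transit time
print(f"ion speed = {v:.3e} m/s, matched separation = {gap * 100:.1f} cm")
```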

  18. Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas (GPS - TTBP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chame, Jacqueline

    2011-05-27

    The goal of this project is the development of the Gyrokinetic Toroidal Code (GTC) Framework and its applications to problems related to the physics of turbulence and turbulent transport in tokamaks. The project involves physics studies, code development, noise effect mitigation, supporting computer science efforts, diagnostics and advanced visualizations, and verification and validation. Its main scientific themes are mesoscale dynamics and non-locality effects on transport, the physics of secondary structures such as zonal flows, and strongly coherent wave-particle interaction phenomena at magnetic precession resonances. Special emphasis is placed on the implications of these themes for rho-star and current scalings and for the turbulent transport of momentum. GTC-TTBP also explores applications to electron thermal transport and particle transport, ITB formation, and cross-cuts such as edge-core coupling, interaction of energetic particles with turbulence, and neoclassical tearing mode trigger dynamics. Code development focuses on major initiatives in the development of full-f formulations and the capacity to simulate flux-driven transport. In addition to the full-f formulation, the project includes the development of numerical collision models and methods for coarse-graining in phase space. Verification is pursued by linear stability comparisons with the FULL and HD7 codes and by benchmarking with the GKV, GYSELA, and other gyrokinetic simulation codes. Validation of gyrokinetic models of ion and electron thermal transport is pursued by systematic stressing comparisons with fluctuation and transport data from the DIII-D and NSTX tokamaks. The physics and code development research programs are supported by complementary efforts in computer science, high-performance computing, and data management.

  19. Methods of treating complex space vehicle geometry for charged particle radiation transport

    NASA Technical Reports Server (NTRS)

    Hill, C. W.

    1973-01-01

    Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined, and evaluations of geometric capability and speed are provided for these codes. Although no code development work is included, several suggestions for significantly improving complex geometry codes are offered.

  20. Computed secondary-particle energy spectra following nonelastic neutron interactions with ¹²C for Eₙ between 15 and 60 MeV: Comparisons of results from two calculational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, dσ/dE, following nonelastic neutron interactions with ¹²C for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael, who used an intranuclear cascade code including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed. 16 refs., 44 figs., 2 tabs.

  1. micrOMEGAs 2.0: A program to calculate the relic density of dark matter in a generic model

    NASA Astrophysics Data System (ADS)

    Bélanger, G.; Boudjema, F.; Pukhov, A.; Semenov, A.

    2007-03-01

    micrOMEGAs 2.0 is a code which calculates the relic density of a stable massive particle in an arbitrary model. The underlying assumption is that there is a conservation law, like R-parity in supersymmetry, which guarantees the stability of the lightest odd particle. The new physics model must be incorporated in the notation of CalcHEP, a package for the automatic generation of squared matrix elements. Once this is done, all annihilation and coannihilation channels are included automatically in any model. Cross-sections at v = 0, relevant for indirect detection of dark matter, are also computed automatically. The package includes three sample models: the minimal supersymmetric standard model (MSSM), the MSSM with complex phases, and the NMSSM. Extension to other models, including non-supersymmetric models, is described.
    Program summary
    Title of program: micrOMEGAs 2.0
    Catalogue identifier: ADQR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers for which the program is designed and others on which it has been tested: PC, Alpha, Mac, Sun
    Operating systems under which the program has been tested: UNIX (Linux, OSF1, SunOS, Darwin, Cygwin)
    Programming language used: C and Fortran
    Memory required to execute with typical data: 17 MB, depending on the number of processes required
    No. of processors used: 1
    Has the code been vectorized or parallelized: no
    No. of lines in distributed program, including test data, etc.: 91 778
    No. of bytes in distributed program, including test data, etc.: 1 306 726
    Distribution format: tar.gz
    External routines/libraries used: none
    Catalogue identifier of previous version: ADQR_v1_3
    Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 577
    Does the new version supersede the previous version: yes
    Nature of physical problem: Calculation of the relic density of the lightest stable particle in a generic new model of particle physics.
    Method of solution: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. Higher-order QCD corrections to Higgs couplings to quark pairs are included.
    Reasons for the new version: There are many models of new physics that propose a candidate for dark matter besides the much-studied minimal supersymmetric standard model. This new version not only incorporates extensions of the MSSM, such as the MSSM with complex phases, or the NMSSM, which contains an extra singlet superfield, but also gives the user the possibility to incorporate a new model easily. For this the user only needs to redefine appropriately a new model file.
    Summary of revisions: Possibility to include in the package any particle physics model with a discrete symmetry that guarantees the stability of the cold dark matter candidate (LOP) and to compute the relic density of CDM. Automatic computation of the cross-sections for annihilation of the LOP at small velocities into SM final states, providing the energy spectra for γ, e, p̄, ν final states. For the MSSM with input parameters defined at the GUT scale, the interface with any of the spectrum calculator codes reads an input file in the SUSY Les Houches Accord (SLHA) format. Implementation of the MSSM with complex parameters (CPV-MSSM), with an interface to CPsuperH to calculate the spectrum. Routine to calculate the electric dipole moment of the electron in the CPV-MSSM. In the NMSSM, a new interface compatible with NMHDECAY 2.1.
    Typical running time: 0.2 sec
    Unusual features of the program: Depending on the parameters of the model, the program generates additional new code, compiles it, and loads it dynamically.
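    The evolution equation mentioned in the method of solution can be sketched in a toy form (illustrative only: the constant lam bundling the entropy density, Hubble rate, and thermal cross-section is an assumed number, while the real code uses relativistic thermal averages and exact model cross-sections). An implicit Euler step keeps the stiff early phase stable.

```python
import math

# In x = m/T the comoving abundance Y obeys
#   dY/dx = -(lam/x^2) * (Y^2 - Yeq^2).
def yeq(x):
    return 0.145 * x ** 1.5 * math.exp(-x)   # non-relativistic equilibrium

def relic_abundance(lam, x0=1.0, x1=100.0, n=100000):
    dx = (x1 - x0) / n
    x, y = x0, yeq(x0)
    for _ in range(n):
        x += dx
        a = lam * dx / (x * x)
        # implicit Euler step: a*Y^2 + Y - (y + a*Yeq^2) = 0, positive root
        y = (-1.0 + math.sqrt(1.0 + 4.0 * a * (y + a * yeq(x) ** 2))) / (2.0 * a)
    return y

y_inf = relic_abundance(lam=1.0e7)   # larger lam -> smaller relic abundance
```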

  2. Particle Acceleration, Magnetic Field Generation and Emission from Relativistic Jets and Supernova Remnants

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Hartmann, D. H.; Hardee, P.; Hededal, C.; Mizunno, Y.; Fishman, G. J.

    2006-01-01

    We performed numerical simulations of particle acceleration, magnetic field generation, and emission from shocks in order to understand the observed emission from relativistic jets and supernova remnants. The investigation involves the study of collisionless shocks, where the Weibel instability is responsible for particle acceleration as well as magnetic field generation. A 3-D relativistic particle-in-cell (RPIC) code has been used to investigate the shock processes in electron-positron plasmas. The evolution of the Weibel instability and its associated magnetic field generation and particle acceleration are studied for two different jet velocities (slow and fast), corresponding to outflows in supernova remnants or to relativistic jets such as those found in AGNs and microquasars. Slow jets have intrinsically different structures in both the generated magnetic fields and the accelerated particle spectrum. In particular, the jet head has a very weak magnetic field, and the ambient electrons are strongly accelerated and dragged by the jet particles. The simulation results exhibit jitter radiation from the inhomogeneous magnetic fields generated by the Weibel instability, which has different spectral properties than standard synchrotron emission in a homogeneous magnetic field.

  3. Shock and Static Compression of Nitrobenzene

    NASA Astrophysics Data System (ADS)

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake

    2000-08-01

    The Hugoniot and static compression curve (isotherm) were investigated using explosive plane-wave generators and diamond anvil cells, respectively. The Hugoniot obtained from the shock experiments is represented by two linear segments: Us = 2.52 + 1.23 up (0.8

  4. MMAPDNG: A new, fast code backed by a memory-mapped database for simulating delayed γ-ray emission with MCNPX package

    NASA Astrophysics Data System (ADS)

    Lou, Tak Pui; Ludewigt, Bernhard

    2015-09-01

    The simulation of the emission of beta-delayed gamma rays following nuclear fission and the calculation of time-dependent energy spectra is a computational challenge. The widely used radiation transport code MCNPX includes a delayed gamma-ray routine that is inefficient and not suitable for simulating complex problems. This paper describes the code "MMAPDNG" (Memory-Mapped Delayed Neutron and Gamma), an optimized delayed gamma module written in C, discusses usage and merits of the code, and presents results. The approach is based on storing the required Fission Product Yield (FPY) data, decay data, and delayed particle data in a memory-mapped file. When compared to the original delayed gamma-ray code in MCNPX, memory utilization is reduced by two orders of magnitude and delayed gamma-ray sampling is sped up by three orders of magnitude. Other delayed particles, such as neutrons and electrons, can be implemented in future versions of the MMAPDNG code using its existing framework.
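    The memory-mapped lookup idea can be sketched generically (the record layout below is invented for illustration and is not MMAPDNG's actual format): fixed-size records allow O(1) random access straight out of the page cache, with no parsing pass over the whole file.

```python
import mmap
import os
import struct
import tempfile

# One fixed-size record per line: (line index, gamma energy in MeV).
RECORD = struct.Struct("<id")

def write_db(path, energies):
    with open(path, "wb") as f:
        for i, e in enumerate(energies):
            f.write(RECORD.pack(i, e))

def lookup(path, i):
    # Map the file read-only and unpack record i directly from the mapping.
    with open(path, "rb") as f, \
            mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        _, energy = RECORD.unpack_from(mm, i * RECORD.size)
        return energy

path = os.path.join(tempfile.mkdtemp(), "gammas.bin")
write_db(path, [0.511, 1.173, 1.332])
e = lookup(path, 2)   # third record: 1.332
```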

  5. A Computer Code for Fully-Coupled Rocket Nozzle Flows (FULLNOZ)

    DTIC Science & Technology

    1975-04-01

    ... surface (i.e. each integration ...). It would be useful to incorporate an "initializing" scheme which utilizes combustion chamber properties as initial ... density is greater than the critical electron density. (During the initial stages of the expansion process, where particle temperatures are very high ...

  6. Monte Carlo Particle Lists: MCPL

    NASA Astrophysics Data System (ADS)

    Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.

    2017-09-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
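    What such an interchange format involves can be illustrated with a minimal sketch (the layout below is invented for this example; the real MCPL layout and its C API are defined by the library itself): a fixed-size packed record plus a small header is enough for lossless round-tripping of particle state.

```python
import io
import struct

# One record per particle: PDG code, position, direction, kinetic energy.
REC = struct.Struct("<i3d3dd")   # pdg, x,y,z [cm], ux,uy,uz, ekin [MeV]

def dump(particles):
    # Tiny header: magic tag plus record count, then the packed records.
    buf = io.BytesIO()
    buf.write(b"PLST")
    buf.write(struct.pack("<I", len(particles)))
    for p in particles:
        buf.write(REC.pack(*p))
    return buf.getvalue()

def load(data):
    assert data[:4] == b"PLST"
    (n,) = struct.unpack_from("<I", data, 4)
    return [REC.unpack_from(data, 8 + i * REC.size) for i in range(n)]
```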

  7. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.

  8. Computational Thermodynamics Analysis of Vaporizing Fuel Droplets in the Human Upper Airways

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Kleinstreuer, Clement

    The detailed knowledge of air flow structures as well as particle transport and deposition in the human lung for typical inhalation flow rates is an important precursor for dosimetry-and-health-effect studies of toxic particles as well as for targeted drug delivery of therapeutic aerosols. Focusing on highly toxic JP-8 fuel aerosols, 3-D airflow and fluid-particle thermodynamics in a human upper airway model starting from mouth to Generation G3 (G0 is the trachea) are simulated using a user-enhanced and experimentally validated finite-volume code. The temperature distributions and their effects on airflow structures, fuel vapor deposition and droplet motion/evaporation are discussed. The computational results show that the thermal effect on vapor deposition is minor, but it may greatly affect droplet deposition in human airways.

  9. IMPETUS: Consistent SPH calculations of 3D spherical Bondi accretion onto a black hole

    NASA Astrophysics Data System (ADS)

    Ramírez-Velasquez, J. M.; Sigalotti, L. Di G.; Gabbasov, R.; Cruz, F.; Klapp, J.

    2018-04-01

    We present three-dimensional calculations of spherically symmetric Bondi accretion onto a stationary supermassive black hole (SMBH) of mass 10⁸ M⊙ within a radial range of 0.02-10 pc, using a modified version of the smoothed particle hydrodynamics (SPH) GADGET-2 code, which ensures approximate first-order consistency (i.e., second-order accuracy) for the particle approximation. First-order consistency is restored by allowing the number of neighbours, nneigh, and the smoothing length, h, to vary with the total number of particles, N, such that the asymptotic limits nneigh → ∞ and h → 0 hold as N → ∞. The ability of the method to reproduce isothermal (γ = 1) and adiabatic (γ = 5/3) Bondi accretion is investigated with increased spatial resolution. In particular, for the isothermal models the numerical radial profiles closely match the Bondi solution, except near the accretor, where the density and radial velocity are slightly underestimated. However, as nneigh is increased and h is decreased, the calculations approach first-order consistency and the deviations from the Bondi solution decrease. The density and radial velocity profiles for the adiabatic models are qualitatively similar to those for the isothermal Bondi accretion. Steady-state Bondi accretion is reproduced by the highly resolved consistent models with a relative error of ≲1% for γ = 1 and ~9% for γ = 5/3, with the adiabatic accretion taking longer than the isothermal case to reach steady flow. The performance of the method is assessed by comparing the results with those obtained using the standard GADGET-2 and GIZMO codes.
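    The consistency-restoring scalings can be made concrete with a small sketch (the nneigh ∝ N^(1/2) choice and the unit box are assumptions for illustration, not the paper's exact prescription): in 3D, h ~ (nneigh/N)^(1/3), so both asymptotic limits hold simultaneously as N grows.

```python
# Pick n_neigh as a growing function of N so that n_neigh -> infinity
# while the smoothing length h -> 0 as N -> infinity.
def sph_resolution(n_part, box=1.0, neigh_exponent=0.5):
    n_neigh = n_part ** neigh_exponent            # e.g. ~ N^(1/2)
    h = box * (n_neigh / n_part) ** (1.0 / 3.0)   # 3D: h ~ (n_neigh/N)^(1/3)
    return n_neigh, h

for n in (10**4, 10**6, 10**8):
    n_neigh, h = sph_resolution(n)
    print(f"N={n:.0e}: n_neigh={n_neigh:.0f}, h={h:.4f}")
```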

  10. THERMINATOR: THERMal heavy-IoN generATOR

    NASA Astrophysics Data System (ADS)

    Kisiel, Adam; Tałuć, Tomasz; Broniowski, Wojciech; Florkowski, Wojciech

    2006-04-01

    THERMINATOR is a Monte Carlo event generator designed for studying particle production in relativistic heavy-ion collisions performed at experimental facilities such as the SPS, RHIC, or LHC. The program implements thermal models of particle production with single freeze-out. It performs the following tasks: (1) generation of stable particles and unstable resonances at the chosen freeze-out hypersurface, with the local phase-space density of particles given by the statistical distribution factors, (2) subsequent space-time evolution and decays of hadronic resonances in cascades, (3) calculation of the transverse-momentum spectra and numerous other observables related to the space-time evolution. The geometry of the freeze-out hypersurface and the collective velocity of expansion may be chosen from two successful models, the Cracow single-freeze-out model and the Blast-Wave model. All particles from the Particle Data Tables are used. The code is written in the object-oriented C++ language and complies with the standards of the ROOT environment.
    Program summary
    Program title: THERMINATOR
    Catalogue identifier: ADXL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXL_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    RAM required to execute with typical data: 50 MB
    Number of processors used: 1
    Computer(s) for which the program has been designed: PC (Pentium III, IV, or Athlon, 512 MB RAM); not hardware dependent (any computer with a C++ compiler and the ROOT environment [R. Brun, F. Rademakers, Nucl. Instrum. Methods A 389 (1997) 81, http://root.cern.ch])
    Operating system(s) for which the program has been designed: Linux (Mandrake 9.0, Debian 3.0, SuSE 9.0, Red Hat FEDORA 3, etc.), Windows XP with Cygwin ver. 1.5.13-1 and gcc ver. 3.3.3 (cygwin special); not system dependent
    External routines/libraries used: ROOT ver. 4.02.00
    Programming language: C++
    Size of the package: 324 KB directory, 40 KB compressed distribution archive, without the ROOT libraries (see http://root.cern.ch for details on the ROOT requirements). The output files created by the code need 1.1 GB for each 500 events.
    Distribution format: tar gzip file
    Number of lines in distributed program, including test data, etc.: 6534
    Number of bytes in distributed program, including test data, etc.: 41 828
    Nature of the physical problem: Statistical models have proved to be very useful in the description of soft physics in relativistic heavy-ion collisions [P. Braun-Munzinger, K. Redlich, J. Stachel, 2003, nucl-th/0304013].
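    Task (1) above amounts to sampling particles from statistical distribution factors on the freeze-out hypersurface. A minimal sketch for the simplest case (a massless Boltzmann gas; the temperature is an assumed illustrative value, and this is not THERMINATOR's actual Cooper-Frye sampler):

```python
import math
import random

# For a massless Boltzmann gas at temperature T the energy spectrum is
# f(E) ~ E^2 * exp(-E/T), a Gamma(3, T) distribution, which can be sampled
# exactly as E = -T * ln(u1*u2*u3) with each u_i uniform on (0, 1].
def sample_energy(temp, rng):
    u = (1.0 - rng.random()) * (1.0 - rng.random()) * (1.0 - rng.random())
    return -temp * math.log(u)

rng = random.Random(1234)
temp = 0.165   # assumed freeze-out temperature [GeV]
energies = [sample_energy(temp, rng) for _ in range(100000)]
mean = sum(energies) / len(energies)   # expectation is 3*T = 0.495 GeV
```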

  11. FY17Q4 Ristra project: Release Version 1.0 of a production toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.; Daniel, David John

    2017-09-21

    The Next Generation Code project will release Version 1.0 of a production toolkit for multi-physics application development on advanced architectures. Features of this toolkit will include remap and link utilities, control and state manager, setup, visualization and I/O, as well as support for a variety of mesh and particle data representations. Numerical physics packages that operate atop this foundational toolkit will be employed in a multi-physics demonstration problem and released to the community along with results from the demonstration.

  12. A Comprehensive Comparison of Relativistic Particle Integrators

    NASA Astrophysics Data System (ADS)

    Ripperda, B.; Bacchini, F.; Teunissen, J.; Xia, C.; Porth, O.; Sironi, L.; Lapenta, G.; Keppens, R.

    2018-03-01

    We compare relativistic particle integrators commonly used in plasma physics, showing several test cases relevant for astrophysics. Three explicit particle pushers are considered, namely, the Boris, Vay, and Higuera-Cary schemes. We also present a new relativistic fully implicit particle integrator that is energy conserving. Furthermore, a method based on the relativistic guiding center approximation is included. The algorithms are described such that they can be readily implemented in magnetohydrodynamics codes or Particle-in-Cell codes. Our comparison focuses on the strengths and key features of the particle integrators. We test the conservation of invariants of motion and the accuracy of particle drift dynamics in highly relativistic, mildly relativistic, and non-relativistic settings. The methods are compared in idealized test cases, i.e., without considering feedback onto the electrodynamic fields, collisions, pair creation, or radiation. The test cases include uniform electric and magnetic fields, E × B fields, force-free fields, and setups relevant for high-energy astrophysics, e.g., a magnetic mirror, a magnetic dipole, and a magnetic null. These tests have direct relevance for particle acceleration in shocks and in magnetic reconnection.
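    As a concrete reference point, the first of the explicit pushers named above (the Boris scheme, in its relativistic form) can be sketched in a few lines; units with q = m = c = 1 are assumed here for brevity, and u is the normalized momentum p/(mc).

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def boris_push(u, e_field, b_field, dt):
    # half electric kick
    um = [u[i] + 0.5 * dt * e_field[i] for i in range(3)]
    gamma = math.sqrt(1.0 + sum(c * c for c in um))
    # magnetic rotation (exactly norm-preserving)
    t = [0.5 * dt * b_field[i] / gamma for i in range(3)]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    c1 = cross(um, t)
    up = [um[i] + c1[i] for i in range(3)]
    c2 = cross(up, s)
    uplus = [um[i] + c2[i] for i in range(3)]
    # second half electric kick
    return [uplus[i] + 0.5 * dt * e_field[i] for i in range(3)]

# pure magnetic field: the rotation conserves |u| (hence energy) exactly
u = [0.3, 0.0, 0.1]
for _ in range(1000):
    u = boris_push(u, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.05)
```

The half-kick/rotate/half-kick structure is what gives the scheme its excellent long-term energy behavior in the uniform-field tests listed above.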

  13. DOUAR: A new three-dimensional creeping flow numerical model for the solution of geological problems

    NASA Astrophysics Data System (ADS)

    Braun, Jean; Thieulot, Cédric; Fullsack, Philippe; DeKool, Marthijn; Beaumont, Christopher; Huismans, Ritske

    2008-12-01

    We present a new finite element code for the solution of the Stokes and energy (or heat transport) equations that has been purposely designed to address crustal-scale to mantle-scale flow problems in three dimensions. Although it is based on an Eulerian description of deformation and flow, the code, which we named DOUAR ('Earth' in Breton language), has the ability to track interfaces and, in particular, the free surface, by using a dual representation based on a set of particles placed on the interface and the computation of a level set function on the nodes of the finite element grid, thus ensuring accuracy and efficiency. The code also makes use of a new method to compute the dynamic Delaunay triangulation connecting the particles based on non-Euclidian, curvilinear measure of distance, ensuring that the density of particles remains uniform and/or dynamically adapted to the curvature of the interface. The finite element discretization is based on a non-uniform, yet regular octree division of space within a unit cube that allows efficient adaptation of the finite element discretization, i.e. in regions of strong velocity gradient or high interface curvature. The finite elements are cubes (the leaves of the octree) in which a q1- p0 interpolation scheme is used. Nodal incompatibilities across faces separating elements of differing size are dealt with by introducing linear constraints among nodal degrees of freedom. Discontinuities in material properties across the interfaces are accommodated by the use of a novel method (which we called divFEM) to integrate the finite element equations in which the elemental volume is divided by a local octree to an appropriate depth (resolution). A variety of rheologies have been implemented including linear, non-linear and thermally activated creep and brittle (or plastic) frictional deformation. 
A simple smoothing operator has been defined to avoid checkerboard oscillations in pressure that tend to develop when using a highly irregular octree discretization and the tri-linear (or Q1-P0) finite element. A three-dimensional cloud of particles is used to track material properties that depend on the integrated history of deformation (the integrated strain, for example); its density is variable and dynamically adapted to the computed flow. The large system of algebraic equations that results from the finite element discretization and linearization of the basic partial differential equations is solved using a multi-frontal massively parallel direct solver that can efficiently factorize poorly conditioned systems resulting from the highly non-linear rheology and the presence of the free surface. The code is almost entirely parallelized. We present example results, including the onset of a Rayleigh-Taylor instability, the indentation of a rigid-plastic material and the formation of a fold beneath a freely eroding surface, which demonstrate the accuracy, efficiency and appropriateness of the new code for solving complex geodynamical problems in three dimensions.
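The octree-based discretization described above can be illustrated with a minimal sketch (my own simplified example, not DOUAR's actual implementation): cells are split recursively wherever a user-supplied criterion flags them, e.g. near an interface, and the leaves become the finite elements.

```python
class OctreeNode:
    """Cubic cell with corner `origin` and edge `size`; may hold 8 children."""
    def __init__(self, origin, size, depth=0):
        self.origin, self.size, self.depth = origin, size, depth
        self.children = []

    def refine(self, needs_refining, max_depth):
        """Recursively split cells flagged by the refinement criterion."""
        if self.depth < max_depth and needs_refining(self):
            h = self.size / 2.0
            x0, y0, z0 = self.origin
            self.children = [
                OctreeNode((x0 + i * h, y0 + j * h, z0 + k * h), h, self.depth + 1)
                for i in (0, 1) for j in (0, 1) for k in (0, 1)
            ]
            for c in self.children:
                c.refine(needs_refining, max_depth)

    def leaves(self):
        """All leaf cells (the finite elements)."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Example criterion: refine only cells crossed by the plane z = 0.5,
# standing in for a material interface.
def near_interface(node):
    z0 = node.origin[2]
    return z0 <= 0.5 <= z0 + node.size

root = OctreeNode((0.0, 0.0, 0.0), 1.0)
root.refine(near_interface, max_depth=3)
```

The leaves then tile the unit cube exactly, with the smallest cells concentrated at the interface, which is the adaptation property the abstract describes.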

  14. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles A. Wemple; Joshua J. Cogliati

    2005-04-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
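The abstract does not give the taps of its four-tap, shift-register-sequence generator; a common choice is Ziff's four-tap rule, sketched below (the lag set and word size are assumptions, not details from the paper):

```python
from collections import deque
import random

def four_tap_rng(seed, lags=(471, 1586, 6988, 9689), n_bits=32):
    """Generator yielding words x[n] = x[n-a] ^ x[n-b] ^ x[n-c] ^ x[n-d].

    The lag set is an assumed choice (Ziff's four-tap rule); the history
    is seeded with an auxiliary generator.
    """
    a, b, c, d = lags
    init = random.Random(seed)
    hist = deque((init.getrandbits(n_bits) for _ in range(d)), maxlen=d)
    while True:
        x = hist[-a] ^ hist[-b] ^ hist[-c] ^ hist[-d]
        hist.append(x)        # maxlen drops the oldest word automatically
        yield x

rng = four_tap_rng(seed=42)
sample = [next(rng) for _ in range(5)]
```

Generators of this family have very long periods, but the initial state carries correlations, so production codes typically discard a warm-up run of outputs before use.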

  15. Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM

    NASA Astrophysics Data System (ADS)

    de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl

    2002-03-01

    We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent, numerical simulation, which includes dust particles, to study the potential role of large-scale, gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performances of our implementation of the parallel code on an Origin 2000 supercomputer are presented and discussed. They exhibit very good speedup behavior and low load unbalancing. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.
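The particle-mesh approach at the heart of such codes deposits each particle's mass onto neighbouring grid points before solving for forces on the mesh; a one-dimensional cloud-in-cell sketch (illustrative only, not the authors' code):

```python
def deposit_cic(positions, n_cells, box=1.0):
    """1-D cloud-in-cell deposit: each unit-mass particle shares its weight
    linearly between the two nearest grid points (periodic box)."""
    dx = box / n_cells
    density = [0.0] * n_cells
    for x in positions:
        s = (x % box) / dx          # position in cell units
        i = int(s)
        frac = s - i                # fractional distance to the left node
        density[i] += (1.0 - frac) / dx
        density[(i + 1) % n_cells] += frac / dx
    return density

# Three particles on a 10-cell grid; total deposited mass stays exactly 3
rho = deposit_cic([0.1, 0.5, 0.52], n_cells=10)
```

The per-particle loop is independent for each particle, which is what makes the deposit step easy to distribute across message-passing processes as described above.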

  16. Channeling through Bent Crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mack, Stephanie; /Ottawa U. /SLAC

    2012-09-07

    Bent crystals have demonstrated potential for use in beam collimation. A process called channeling is when accelerated particle beams are trapped by the nuclear potentials in the atomic planes within a crystal lattice. If the crystal is bent then the particles can follow the bending angle of the crystal. There are several different effects that are observed when particles travel through a bent crystal, including dechanneling, volume capture, volume reflection and channeling. With a crystal placed at the edge of a particle beam, part of the fringe of the beam can be deflected away towards a detector or beam dump, thus helping collimate the beam. There is currently FORTRAN code by Igor Yazynin that has been used to model the passage of particles through a bent crystal. Using this code, the effects mentioned were explored for beam energy that would be seen at the Facility for Advanced Accelerator Experimental Tests (FACET) at a range of crystal orientations with respect to the incoming beam. After propagating 5 meters in vacuum space past the crystal, the channeled particles were observed to separate from most of the beam, with some noise due to dechanneled particles. Progressively smaller bending radii, with corresponding shorter crystal lengths, were compared and it was seen that multiple scattering decreases with the length of the crystal, therefore allowing for cleaner detection of the channeled particles. The input beam was then modified and only a portion of the beam sent through the crystal. With the majority of the beam not affected by the crystal, most particles were not deflected, and after propagation the channeled particles were seen to be deflected approximately 5 mm. After a portion of the beam travels through the crystal, the entire beam was then sent through a quadrupole magnet, which increased the separation of the channeled particles from the remainder of the beam to a distance of around 20 mm.
A different code, which was developed at SLAC, was used to create an angular profile plot that was compared with the output of Yazynin's code for a beam with no multiple scattering. The results were comparable, with volume reflection and channeling effects observed, and the range of crystal orientations at which volume reflection is seen was about 1 mrad in both simulations.
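The geometry behind the quoted numbers is simple: in free space a particle with transverse angle θ drifts to offset x + Lθ, so a channeling kick of about 1 mrad over 5 m of propagation gives a separation of about 5 mm (a sketch; the 1 mrad kick is inferred from the quoted figures, not taken from the code):

```python
def drift(x, theta, L):
    """Small-angle free-space propagation of the trace-space pair (offset, angle)."""
    return x + L * theta, theta

# A channeled particle that picked up ~1 mrad, followed by 5 m of vacuum drift
x, theta = drift(x=0.0, theta=1.0e-3, L=5.0)
offset_mm = x * 1e3
```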

  17. NTRFACE for MAGIC

    DTIC Science & Technology

    1989-07-31

    Gladd, N. T.

    The NTRFACE system was developed ... made concrete by applying it to a specific application: a mature, highly complex plasma physics particle-in-cell simulation code named MAGIC.

  18. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  19. Hybrid petacomputing meets cosmology: The Roadrunner Universe project

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Lukić, Zarija; Daniel, David; Fasel, Patricia; Desai, Nehal; Heitmann, Katrin; Hsu, Chung-Hsing; Ankeny, Lee; Mark, Graham; Bhattacharya, Suman; Ahrens, James

    2009-07-01

    The target of the Roadrunner Universe project at Los Alamos National Laboratory is a set of very large cosmological N-body simulation runs on the hybrid supercomputer Roadrunner, the world's first petaflop platform. Roadrunner's architecture presents opportunities and difficulties characteristic of next-generation supercomputing. We describe a new code designed to optimize performance and scalability by explicitly matching the underlying algorithms to the machine architecture, and by using the physics of the problem as an essential aid in this process. While applications will differ in specific exploits, we believe that such a design process will become increasingly important in the future. The Roadrunner Universe project code, MC3 (Mesh-based Cosmology Code on the Cell), uses grid and direct particle methods to balance the capabilities of Roadrunner's conventional (Opteron) and accelerator (Cell BE) layers. Mirrored particle caches and spectral techniques are used to overcome communication bandwidth limitations and possible difficulties with complicated particle-grid interaction templates.

  20. The Simpsons program 6-D phase space tracking with acceleration

    NASA Astrophysics Data System (ADS)

    Machida, S.

    1993-12-01

    A particle tracking code, Simpsons, in 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines do, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
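Treating time as the independent variable means machine parameters are sampled from tabulated curves at each step; a sketch of reading a ramp table (names and numbers are illustrative, not from the Simpsons code):

```python
import bisect

def interp_ramp(table, t):
    """Linearly interpolate a ramp table [(t_i, E_i), ...] at time t,
    clamping outside the tabulated range."""
    times = [p[0] for p in table]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (t0, e0), (t1, e1) = table[i - 1], table[i]
    return e0 + (e1 - e0) * (t - t0) / (t1 - t0)

# Made-up machine cycle: energy (arbitrary units) versus time (s)
ramp = [(0.0, 1.0), (0.5, 2.0), (1.0, 4.0)]
energy = interp_ramp(ramp, 0.75)   # halfway up the second segment
```

In a time-stepped tracker, every turn (or step) calls such a lookup so that magnet settings and rf voltage follow the programmed cycle rather than a fixed lattice sequence.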

  1. Edge-core interaction of ITG turbulence in Tokamaks: Is the Tail Wagging the Dog?

    NASA Astrophysics Data System (ADS)

    Ku, S.; Chang, C. S.; Dif-Pradalier, G.; Diamond, P. H.

    2010-11-01

    A full-f XGC1 gyrokinetic simulation of ITG turbulence, together with the neoclassical dynamics without scale separation, has been performed for the whole-volume plasma in realistic diverted DIII-D geometry. The simulation revealed that the global structure of the turbulence and transport in tokamak plasmas results from a synergy between edge-driven inward propagation of turbulence intensity and core-driven outward heat transport. The global ion confinement and the ion temperature gradient then self-organize quickly, on the turbulence propagation time scale. This synergy results in inward-outward pulse scattering, leading to spontaneous production of strong internal shear layers in which turbulent transport is almost suppressed over several radial correlation lengths. Co-existence of the edge turbulence source and the strong internal shear layer leads to radially increasing turbulence intensity and ion thermal transport profiles.

  2. PYFLOW 2.0. A new open-source software for quantifying the impact and depositional properties of dilute pyroclastic density currents

    NASA Astrophysics Data System (ADS)

    Dioguardi, Fabio; Dellino, Pierfrancesco

    2017-04-01

    Dilute pyroclastic density currents (DPDCs) are ground-hugging turbulent gas-particle flows that move down volcano slopes under the combined action of density contrast and gravity. DPDCs are dangerous to human lives and infrastructure both because they exert a dynamic pressure in their direction of motion and because they transport volcanic ash particles, which remain in the atmosphere during the waning stage and after the passage of a DPDC. Deposits formed by the passage of a DPDC show peculiar characteristics that can be linked to flow field variables with sedimentological models. Here we present PYFLOW_2.0, a significantly improved version of the code of Dioguardi and Dellino (2014), which has already been used extensively for the hazard assessment of DPDCs at Campi Flegrei and Vesuvius (Italy). In this new version the code structure, the computation times and the data input method have been updated and improved. A set of shape-dependent drag laws has been implemented to better estimate the aerodynamic drag of particles transported and deposited by the flow. A depositional model for calculating the deposition time and rate of the ash and lapilli layer formed by the pyroclastic flow has also been included. This model links deposit characteristics (e.g. componentry, grainsize) to flow characteristics (e.g. flow average density and shear velocity), the latter either calculated by the code itself or given as input by the user. The deposition rate is calculated by summing the contributions of each grainsize class of all components constituting the deposit (e.g. juvenile particles, crystals, etc.), which are in turn computed as a function of particle density, terminal velocity, concentration and deposition probability. Here we apply the concept of deposition probability, previously introduced for estimating the deposition rates of turbidity currents (Stow and Bowen, 1980), to DPDCs, although with a different approach, i.e. starting from what is observed in the deposit (e.g. the weight-fraction ratios between the different grainsize classes). In this way, more realistic estimates of the deposition rate can be obtained, as the deposition probabilities of the different grainsize classes constituting the DPDC deposit can differ and need not equal unity. As experimental validation, deposition rates of large-scale experiments, previously computed with different methods, have been calculated and are presented. Results of model application to DPDCs and turbidity currents will also be presented. Dioguardi, F., and P. Dellino (2014), PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data, Powder Technol., 66, 200-210, doi:10.1016/j.cageo.2014.01.013. Stow, D. A. V., and A. J. Bowen (1980), A physical model for the transport and sorting of fine-grained sediment by turbidity currents, Sedimentology, 27, 31-46.
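The deposition-rate summation described, with each grainsize class contributing according to its particle density, terminal velocity, concentration and deposition probability, can be sketched as follows (variable names and values are mine, not PYFLOW's):

```python
def deposition_rate(classes):
    """Sum the per-class contributions:
    rate_i = particle density * terminal velocity * concentration * P_deposit."""
    return sum(c["rho"] * c["w_t"] * c["conc"] * c["p_dep"] for c in classes)

# Two illustrative grainsize classes with different deposition probabilities
classes = [
    {"rho": 2500.0, "w_t": 1.2, "conc": 1e-4, "p_dep": 1.0},   # e.g. crystals
    {"rho": 1000.0, "w_t": 0.4, "conc": 5e-4, "p_dep": 0.6},   # e.g. juvenile ash
]
rate = deposition_rate(classes)   # kg m^-2 s^-1 with SI inputs
```

Setting `p_dep` below unity for some classes is exactly the mechanism the abstract describes: it lowers the estimated rate relative to assuming that every grain reaching the bed stays there.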

  3. Beam dynamics simulation of HEBT for the SSC-linac injector

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Ni; Yuan, You-Jin; Xiao, Chen; He, Yuan; Wang, Zhi-Jun; Sheng, Li-Na

    2012-11-01

    The SSC-linac (a new injector for the Separated Sector Cyclotron) is being designed in the HIRFL (Heavy Ion Research Facility in Lanzhou) system to accelerate 238U34+ from 3.72 keV/u to 1.008 MeV/u. As a part of the SSC-linac injector, the HEBT (high energy beam transport) line has been designed using the TRACE-3D code and simulated with the 3D PIC (particle-in-cell) Track code. The total length of the HEBT is about 12 meters, and a beam line of about 6 meters is shared with the existing beam line of the HIRFL system. The simulation results show that the particles can be delivered efficiently through the HEBT and that the particles at the exit of the HEBT match the acceptance of the SSC well for further acceleration. Dispersion is completely eliminated in the HEBT. The space-charge effect calculated by the Track code is inconspicuous. According to the simulation, more than 60 percent of the particles from the ion source can be transported into the acceptance of the SSC.

  4. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
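The core SPH operation the GPU must accelerate is the kernel-weighted summation over neighbours; a one-dimensional summation-density sketch with the cubic spline kernel (a simplified illustration, not the authors' CUDA implementation):

```python
def w_cubic(r, h):
    """1-D cubic spline SPH kernel, normalised so that its integral is 1."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(xs, masses, h):
    """Summation density: rho_i = sum_j m_j W(x_i - x_j, h)."""
    return [sum(m * w_cubic(x_i - x_j, h) for x_j, m in zip(xs, masses))
            for x_i in xs]

# Uniformly spaced particles of mass m = dx should give near-unit density
xs = [i * 0.1 for i in range(50)]
rho = density(xs, [0.1] * 50, h=0.1)
```

The double loop is O(N^2) here; real codes use neighbour lists on a cell grid, and it is precisely this embarrassingly parallel per-particle summation that maps so well onto CUDA threads.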

  5. Assessing MMOD Impacts on Seal Performance

    NASA Technical Reports Server (NTRS)

    deGroh, Henry C., III; Daniels, C.; Dunlap, P.; Steinetz, B.

    2007-01-01

    The elastomer seal needed to seal in cabin air when NASA's Crew Exploration Vehicle is docked is exposed to space prior to docking. While open to space, the seal might be hit by orbital debris or meteoroids. The likelihood of damage of this type depends on the size of the particle. Our campaign is designed to find the smallest particle that will cause seal failure resulting in loss of mission. We will then be able to estimate environmental risks to the seal. Preliminary tests indicate seals can withstand a surprising amount of damage and still function. Collaborations with internal and external partners are in place and include seal leak testing, modeling of the space environment using a computer code known as BUMPER, and hypervelocity impact (HVI) studies at Caltech. Preliminary work at White Sands Test Facility showed a 0.5 mm diameter HVI damaged areas about 7 times that diameter, boring deep (5 mm) into elastomer specimens. BUMPER simulations indicate there is a 1 in 1440 chance of being hit by a particle of diameter 0.08 cm for current Lunar missions, and 0.27 cm for a 10 year ISS LIDS seal area exposure.

  6. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo, F. (Dipartimento SBAI, Università di Roma “La Sapienza”, Via A. Scarpa 14, 00161 Roma; E-mail: francesco.massimo@ensta-paristech.fr); Atzeni, S.

    Architect, a time-explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  7. Fast particles in a steady-state compact FNS and compact ST reactor

    NASA Astrophysics Data System (ADS)

    Gryaznevich, M. P.; Nicolai, A.; Buxton, P.

    2014-10-01

    This paper presents results of studies of fast particles (ions and alpha particles) in a steady-state compact fusion neutron source (CFNS) and a compact spherical tokamak (ST) reactor with Monte-Carlo and Fokker-Planck codes. Full-orbit simulations of fast particle physics indicate that a compact high-field ST can be optimized for energy production by a reduction of the plasma current necessary for alpha containment, compared with predictions made using simple analytic expressions or the guiding centre approximation in a numerical code. Alpha particle losses may result in significant heating and erosion of the first wall, so such losses for an ST pilot plant have been calculated, and the dependence of the total and peak wall loads on the plasma current has been studied. The problem of dilution has been investigated and results for compact and large devices are compared.

  8. TRAX-CHEM: A pre-chemical and chemical stage extension of the particle track structure code TRAX in water targets

    NASA Astrophysics Data System (ADS)

    Boscolo, D.; Krämer, M.; Durante, M.; Fuss, M. C.; Scifoni, E.

    2018-04-01

    The production, diffusion, and interaction of particle-beam-induced water-derived radicals is studied with the pre-chemical and chemical module of the Monte Carlo particle track structure code TRAX, based on a step-by-step approach. After a description of the implemented model, the chemical evolution of the most important products of water radiolysis is studied for electron, proton, helium, and carbon ion radiation at different energies. The validity of the model is verified by comparing the calculated time- and LET-dependent yields with experimental data from the literature and other simulation approaches.
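A step-by-step chemical stage advances each radical by Gaussian jumps whose width follows from its diffusion coefficient, with sigma = sqrt(2*D*dt) per coordinate. A minimal sketch (the diffusion coefficient is a typical literature value for the OH radical, and the scheme is a generic Brownian step, not TRAX-CHEM's actual algorithm):

```python
import math
import random

def diffuse(positions, D, dt, rng):
    """One step-by-step diffusion update: each Cartesian coordinate receives
    an independent Gaussian displacement of standard deviation sqrt(2*D*dt)."""
    sigma = math.sqrt(2.0 * D * dt)
    return [tuple(x + rng.gauss(0.0, sigma) for x in p) for p in positions]

rng = random.Random(7)
# 1000 OH radicals at the origin; D ~ 2.8e-9 m^2/s, time step 1 ps
radicals = [(0.0, 0.0, 0.0)] * 1000
for _ in range(100):
    radicals = diffuse(radicals, D=2.8e-9, dt=1e-12, rng=rng)
```

After a total time t the mean square displacement should approach 6*D*t in three dimensions, which is a standard sanity check for such a stepping scheme.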

  9. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumann, K; Weber, U; Simeonov, Y

    Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence pattern along the beam-axis, the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
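The matrix method mentioned above composes 2x2 transfer maps for drifts and quadrupoles acting on the trace-space vector (x, x'); a minimal sketch with the standard matrices (the beamline layout and strengths below are illustrative, not the facility's):

```python
import math

def drift(L):
    """Field-free region of length L: [[1, L], [0, 1]]."""
    return [[1.0, L], [0.0, 1.0]]

def quad_focusing(k, L):
    """Thick quadrupole in its focusing plane; k in m^-2, L in m."""
    s = math.sqrt(k)
    return [[math.cos(s * L), math.sin(s * L) / s],
            [-s * math.sin(s * L), math.cos(s * L)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def track(m, state):
    x, xp = state
    return (m[0][0] * x + m[0][1] * xp, m[1][0] * x + m[1][1] * xp)

# Beamline: 0.5 m drift -> focusing quad -> 1.0 m drift (rightmost acts first)
line = matmul(drift(1.0), matmul(quad_focusing(k=2.0, L=0.3), drift(0.5)))
x, xp = track(line, (1.0e-3, 0.0))   # a 1 mm off-axis ray
```

A useful check is that each matrix (and any product of them) has unit determinant, since the underlying motion is symplectic; an optimizer can then tune `k` for the two quadrupoles to shrink the spot at the iso-center.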

  10. ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stillo, Andrew; Ricketts, Craig I.

    High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section must meet several performance requirements. These requirements include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluate the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity, if subjected to severe operating conditions such as elevated relative humidity, fog conditions or elevated temperature after loading in use over long service intervals, is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested.
The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK [1]. (authors)

  11. Constraints on particle density evolution within a CME at Mercury

    NASA Astrophysics Data System (ADS)

    Exner, W.; Liuzzo, L.; Heyner, D.; Feyerabend, M.; Motschmann, U. M.; Glassmeier, K. H.; Shiota, D.; Kusano, K.

    2017-12-01

    Mercury (RM = 2440 km) is the closest orbiting planet around the Sun and is embedded in an intense and highly variable solar wind. Mercury's intrinsic dipole with a southward magnetic moment is aligned with the rotation axis and has a northward offset of 0.2 RM. In-situ data from the MESSENGER spacecraft on the magnetic environment near Mercury indicate that a coronal mass ejection (CME) passed the planet on 8 May 2012. The data constrain the direction and magnitude of the CME magnetic field, but no information on its particle density could be determined. We apply the hybrid (kinetic ions, electron fluid) code A.I.K.E.F. to study the interaction of Mercury's magnetosphere with the CME. We use MESSENGER magnetic field observations as well as simulation results to constrain the evolution of the particle density inside the CME. We show that within a 24-hour period the particle density within the CME had to vary between 1 and 100 cm^-3 in order to explain the MESSENGER magnetic field observations.

  12. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sublet, J.-Ch., E-mail: jean-christophe.sublet@ukaea.uk; Eastwood, J.W.; Morgan, J.G.

    Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries.
The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.

  13. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.

    2017-01-01

    Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. 
The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.
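The inventory evolution such activation-transmutation codes solve generalizes the classic Bateman equations, dN/dt = A N, where the matrix A collects decay and reaction rates. A two-nuclide sketch comparing the analytic Bateman solution with naive explicit-Euler integration (decay constants below are made up, and this is of course not FISPACT-II's solver):

```python
import math

def bateman_two(n0, lam1, lam2, t):
    """Analytic inventory for the chain 1 -> 2 -> (stable), decay constants
    lam1 != lam2, starting from n0 atoms of nuclide 1 and none of nuclide 2."""
    n1 = n0 * math.exp(-lam1 * t)
    n2 = n0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

def euler_two(n0, lam1, lam2, t, steps=200000):
    """Explicit-Euler integration of the same rate equations, for comparison."""
    dt = t / steps
    n1, n2 = n0, 0.0
    for _ in range(steps):
        d1 = -lam1 * n1
        d2 = lam1 * n1 - lam2 * n2
        n1 += d1 * dt
        n2 += d2 * dt
    return n1, n2

exact = bateman_two(1.0, lam1=0.5, lam2=0.1, t=1.0)
approx = euler_two(1.0, lam1=0.5, lam2=0.1, t=1.0)
```

Real inventory codes face thousands of coupled nuclides with rate constants spanning many orders of magnitude, so they use stiff solvers rather than explicit stepping; the comparison above is only a correctness check on a toy chain.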

  14. Fission time scale from pre-scission neutron and α multiplicities in the 16O + 194Pt reaction

    NASA Astrophysics Data System (ADS)

    Kapoor, K.; Verma, S.; Sharma, P.; Mahajan, R.; Kaur, N.; Kaur, G.; Behera, B. R.; Singh, K. P.; Kumar, A.; Singh, H.; Dubey, R.; Saneesh, N.; Jhingan, A.; Sugathan, P.; Mohanto, G.; Nayak, B. K.; Saxena, A.; Sharma, H. P.; Chamoli, S. K.; Mukul, I.; Singh, V.

    2017-11-01

Pre- and post-scission α-particle multiplicities have been measured for the reaction 16O + 194Pt at 98.4 MeV, forming the 210Rn compound nucleus. α particles were measured at various angles in coincidence with the fission fragments. The moving-source technique was used to extract the pre- and post-scission contributions to the particle multiplicity. Studies of the fission mechanism using different probes are helpful in understanding the detailed reaction dynamics. The neutron multiplicities for this reaction have been reported earlier. The multiplicities of neutrons and α particles were reproduced using the standard statistical model code joanne2 by varying the transient (τtr) and saddle-to-scission (τssc) times. This code includes deformation-dependent particle transmission coefficients, binding energies and level densities. Fission time scales of the order of 50-65 × 10^-21 s are required to reproduce the neutron and α-particle multiplicities.

  15. Megaquakes, prograde surface waves and urban evolution

    NASA Astrophysics Data System (ADS)

    Lomnitz, C.; Castaños, H.

    2013-05-01

Cities grow according to evolutionary principles. They move away from soft-ground conditions and avoid vulnerable types of structures. A megaquake generates prograde surface waves that produce unexpected damage in modern buildings. The examples (Figs. 1 and 2) were taken from the 1985 Mexico City and the 2010 Concepción, Chile megaquakes. About 400 structures built under supervision according to modern building codes were destroyed in the Mexican earthquake. All were sited on soft ground. A Rayleigh wave causes surface particles to move along ellipses in a vertical plane. Building codes assume that this motion is retrograde, as on a homogeneous elastic halfspace, but soft soils are intermediate materials between a solid and a liquid. As Poisson's ratio tends to ν → 0.5, the particle motion turns prograde, as it would on a homogeneous fluid halfspace. Building codes assume that the tilt of the ground is not in phase with the acceleration, but we show that structures on soft ground tilt into the direction of the horizontal ground acceleration. The combined effect of gravity and acceleration may destabilize a structure when it is in resonance with its eigenfrequency. References: Castaños, H. and C. Lomnitz, 2013. Charles Darwin and the 1835 Chile earthquake. Seismol. Res. Lett., 84, 19-23. Lomnitz, C., 1990. Mexico 1985: the case for gravity waves. Geophys. J. Int., 102, 569-572. Malischewsky, P.G. et al., 2008. The domain of existence of prograde Rayleigh-wave particle motion. Wave Motion, 45, 556-564. Figure 1: 1985 Mexico megaquake, overturned 15-story apartment building in Mexico City. Figure 2: 2010 Chile megaquake, overturned 15-story reinforced-concrete apartment building in Concepción.

  16. A comparison of total reaction cross section models used in particle and heavy ion transport codes

    NASA Astrophysics Data System (ADS)

    Sihver, Lembit; Lantz, M.; Takechi, M.; Kohama, A.; Ferrari, A.; Cerutti, F.; Sato, T.

To be able to calculate nucleon-nucleus and nucleus-nucleus total reaction cross sections with precision is very important for studies of basic nuclear properties, e.g. nuclear structure. It is also of importance for particle and heavy ion transport calculations because, in all particle and heavy ion transport codes, the probability that a projectile particle collides within a certain distance x in matter depends on the total reaction cross section. Furthermore, the total reaction cross sections also scale the calculated partial fragmentation cross sections. It is therefore crucial that accurate total reaction cross section models are used in the transport calculations. In this paper, different models for calculating nucleon-nucleus and nucleus-nucleus total reaction cross sections are compared and discussed.
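The dependence described above, the probability of a collision within a distance x, is the standard exponential attenuation law set by the macroscopic cross section. A small sketch with illustrative numbers (a hypothetical 1-barn total reaction cross section and target density, not values from any of the compared models):

```python
import math

def collision_probability(sigma_cm2, n_per_cm3, x_cm):
    """P(collision within x) = 1 - exp(-n*sigma*x); n*sigma is the
    macroscopic cross section, and its inverse is the mean free path."""
    return 1.0 - math.exp(-n_per_cm3 * sigma_cm2 * x_cm)

sigma = 1.0e-24          # 1 barn, illustrative total reaction cross section
n = 5.0e22               # nuclei per cm^3, illustrative target density
mfp = 1.0 / (n * sigma)  # mean free path: 20 cm with these numbers
p = collision_probability(sigma, n, mfp)  # 1 - 1/e at one mean free path
```

Since the partial fragmentation cross sections are scaled by sigma, any error in it propagates directly into both the sampled collision distances and the fragment yields.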

  17. Stress Wave Interactions with Tunnels Buried in Well-Characterized Jointed Media.

    DTIC Science & Technology

    1980-06-01

Particle velocity and principal stress fields at 62 μsec for the elastic-plastic media model (Case 1, 0.8 kbar)...is used; the basic formulation is similar to the HEMP code (Ref. 3). The numerical solutions and material properties are described in Section 3...media is 16A rock simulant. The elastic-plastic properties are modeled with the following parameters: Bulk Modulus K = 0.131 Mbar, Shear Modulus G

  18. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty: for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
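A standard ingredient of a-posteriori numerical-error estimation is comparing the same quantity at two resolutions and exploiting the known convergence order (Richardson extrapolation). This generic sketch, not specific to the paper's method, estimates the discretization error of a second-order trapezoid rule on a known integral:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule, second-order accurate in h = (b-a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

coarse = trapezoid(math.sin, 0.0, math.pi, 8)
fine = trapezoid(math.sin, 0.0, math.pi, 16)

# For a second-order method, error(fine) ~ (fine - coarse) / (2**2 - 1).
error_estimate = (fine - coarse) / 3.0
actual_error = 2.0 - fine  # the exact integral of sin over [0, pi] is 2
```

The estimated and actual errors agree closely here because the integrand is smooth; the combined uncertainty-plus-error bounds in the paper address the harder case where this asymptotic assumption is not guaranteed.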

  19. Global linear gyrokinetic particle-in-cell simulations including electromagnetic effects in shaped plasmas

    NASA Astrophysics Data System (ADS)

    Mishchenko, A.; Borchardt, M.; Cole, M.; Hatzky, R.; Fehér, T.; Kleiber, R.; Könies, A.; Zocco, A.

    2015-05-01

    We give an overview of recent developments in electromagnetic simulations based on the gyrokinetic particle-in-cell codes GYGLES and EUTERPE. We present the gyrokinetic electromagnetic models implemented in the codes and discuss further improvements of the numerical algorithm, in particular the so-called pullback mitigation of the cancellation problem. The improved algorithm is employed to simulate linear electromagnetic instabilities in shaped tokamak and stellarator plasmas, which was previously impossible for the parameters considered.

  20. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    NASA Astrophysics Data System (ADS)

    Iwamoto, Yosuke; Ogawa, Tatsuhiko

    2017-04-01

Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL-2015, ENDF/B-VII.1, and JEFF-3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL-2015, ENDF/B-VII.1, and JENDL-4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL-2015 and NJOY + TENDL-2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for 72Ge, 75As, 89Y, and 109Ag in the ENDF/B-VII.1 library, and for 90Zr and 55Mn in the JEFF-3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL-2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL-2015 can calculate PKA spectra and heating numbers correctly.

  1. Effect of injection velocity and particle concentration on transport of nanoscale zero-valent iron and hydraulic conductivity in saturated porous media

    NASA Astrophysics Data System (ADS)

    Strutz, Tessa J.; Hornbruch, Götz; Dahmke, Andreas; Köber, Ralf

    2016-08-01

Successful groundwater remediation by injecting nanoscale zero-valent iron (NZVI) particles requires efficient particle transport and distribution in the subsurface. This study focused on the influence of injection velocity and particle concentration on the spatial NZVI particle distribution and the deposition processes, and on quantifying the induced decrease in hydraulic conductivity (K) resulting from particle retention, using lab tests and numerical simulations. Horizontal column tests of 2 m length were performed with initial Darcy injection velocities (q0) of 0.5, 1.5, and 4.1 m/h and elemental iron input concentrations (Fe0in) of 0.6, 10, and 17 g/L. Concentrations of Fe0 in the sand were determined by magnetic susceptibility scans, which provide detailed Fe0 distribution profiles along the column. NZVI particles were transported farther at higher injection velocities and higher input concentrations. K decreased by one order of magnitude during injection in all experiments, with a stronger decrease after reaching Fe0 concentrations of about 14-18 g/kg(sand). To simulate the observed nanoparticle transport behavior, the existing finite-element code OGS has been successfully extended and parameterized for the investigated experiments using blocking, ripening, and straining as governing deposition processes. Considering parameter relationships deduced from single simulations for each experiment (e.g. deposition rate constants as a function of flow velocity), one mean parameter set has been generated that adequately reproduces the observations for most of the investigated realistic injection conditions. An assessment of the deposition processes related to clogging effects showed that the percentage of retention due to straining and ripening increased during the experimental run time, resulting in an ongoing reduction of K. 
Clogging is mainly evoked by straining, which dominates particle deposition at higher flow velocities, while blocking and ripening play a significant role for attachment mainly at lower injection velocities. Since the injection of fluids at real sites leads to decreasing flow velocities with increasing radial distance from the injection point, the simulation of particle transport requires accounting for all deposition processes mentioned above. Thus, the derived mean parameter set can be used as a basis for quantitative and predictive simulations of particle distributions and clogging effects at both lab and field scale. Since decreases in K can change the flow system, which may have positive as well as negative implications for in situ remediation at a contaminated site, reliable simulation is of great importance for NZVI injection design and prediction.
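Classical colloid filtration theory, which the blocking/ripening/straining kinetics above extend, predicts for first-order attachment an exponential breakthrough profile C/C0 = exp(-k_att L / v). A minimal sketch (the rate constant is illustrative, not fitted to these experiments, and in practice k_att itself depends on velocity, as the abstract notes):

```python
import math

def breakthrough_fraction(k_att, length_m, velocity_m_per_h):
    """C/C0 = exp(-k_att * L / v): fraction of injected particles that
    survives first-order attachment over a column of length L."""
    return math.exp(-k_att * length_m / velocity_m_per_h)

# Illustrative rate constant; 2 m column; two of the tested Darcy velocities.
slow = breakthrough_fraction(k_att=1.0, length_m=2.0, velocity_m_per_h=0.5)
fast = breakthrough_fraction(k_att=1.0, length_m=2.0, velocity_m_per_h=4.1)
# Higher velocity -> smaller k*L/v exponent -> more particles travel farther,
# consistent with the farther NZVI transport observed at higher injection rates.
```

The full OGS model replaces the single constant k_att with the blocking, ripening, and straining terms, which is what allows it to also reproduce the time-dependent K reduction.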

  2. NANO-PARTICLE TRANSPORT AND DEPOSITION IN BIFURCATING TUBES WITH DIFFERENT INLET CONDITIONS

    EPA Science Inventory

Transport and deposition of ultrafine particles in straight, bent and bifurcating tubes are considered for different inlet Reynolds numbers, velocity profiles, and particle sizes ranging from 1 nm to 150 nm. A commercial finite-volume code with user-supplied programs was validated with a...

  3. Geometry Calibration of the SVT in the CLAS12 Detector

    NASA Astrophysics Data System (ADS)

    Davies, Peter; Gilfoyle, Gerard

    2016-09-01

A new detector called CLAS12 is being built in Hall B as part of the 12 GeV Upgrade at Jefferson Lab to learn how quarks and gluons form nuclei. The Silicon Vertex Tracker (SVT) is one of the subsystems designed to track the trajectories of charged particles emitted from the target at large angles. The sensors of the SVT consist of long, narrow strips embedded in a silicon substrate. There are 256 strips in a sensor, with stereo angles of 0-3°. The locations of the strips must be known to a precision of a few microns in order to accurately reconstruct particle tracks with the required resolution of 50-60 microns. Our first step toward achieving this resolution was to validate the nominal geometry relative to the design specification. We also resolved differences between the design and the CLAS12 Geant4-based simulation code, GEMC. We developed software to apply alignment shifts to the nominal design geometry from a survey of fiducial points on the structure that supports each sensor. The final geometry will be generated by a common package written in Java to ensure consistency between the simulation and reconstruction codes. The code will be tested by studying the impact of known distortions of the nominal geometry in simulation. Work supported by the University of Richmond and the US Department of Energy.

  4. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell-model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier; one can also constrain its expectation value in the zero-temperature code. The orbital filling can likewise be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful for resolving near-degeneracies among distinct minima.
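The Lagrange-multiplier mechanism described, adding a single-particle operator to the Hamiltonian and tuning the multiplier until the expectation value hits its target, can be illustrated on a toy one-dimensional problem. Here E(x) = (x - 2)^2 stands in for the Hartree-Fock energy and O(x) = x for the constrained operator (both entirely hypothetical, chosen so the inner minimization is analytic):

```python
def constraint_multiplier(target, lo=-10.0, hi=10.0, tol=1e-8):
    """Bisect on the Lagrange multiplier lam so that the unconstrained
    minimizer of E(x) - lam*O(x) satisfies O(x) = target.
    For E(x) = (x - 2)**2 and O(x) = x, that minimizer is x = 2 + lam/2,
    which increases monotonically with lam."""
    def x_of(lam):
        return 2.0 + lam / 2.0  # analytic minimizer of (x-2)^2 - lam*x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if x_of(mid) < target:
            lo = mid  # multiplier too small: constraint undershoots
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = constraint_multiplier(target=3.0)  # analytic answer: lam = 2
```

In the actual codes the inner step is a gradient minimization over density matrices rather than a closed-form expression, but the outer multiplier adjustment follows the same logic.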

  5. Analysis of Physical Properties of Dust Suspended in the Mars Atmosphere

    NASA Technical Reports Server (NTRS)

    Snook, Kelly; McKay, Chris; Cantwell, Brian

    1998-01-01

Methods for iteratively determining the infrared optical constants of dust suspended in the Mars atmosphere are described. High quality spectra for wavenumbers from 200 to 2000 1/cm were obtained over a wide range of view angles by the Mariner 9 spacecraft, when it observed a global Martian dust storm in 1971-72. In this research, theoretical spectra of the emergent intensity from Martian dust clouds are generated using a 2-stream source-function radiative transfer code. The code computes the radiation field in a plane-parallel, vertically homogeneous, multiply scattering atmosphere. Calculated intensity spectra are compared with the actual spacecraft data to iteratively retrieve the optical properties and opacity of the dust, as well as the surface temperature of Mars at the time and location of each measurement. Many different particle size distributions are investigated to determine the best fit to the data. The particles are assumed spherical and the temperature profile was obtained from the CO2 band shape. Given a reasonable initial guess for the indices of refraction, the searches converge in a well-behaved fashion, producing a fit with error of less than 1.2 K (rms) to the observed brightness spectra. The particle size distribution corresponding to the best fit was a lognormal distribution with a mean particle radius r(sub m) = 0.66 microns and variance omega(sup 2) = 0.412 (r(sub eff) = 1.85 microns, v(sub eff) = 0.51), in close agreement with the size distribution found to be the best fit in the visible wavelengths in recent studies. The optical properties and the associated single scattering properties are shown to be a significant improvement over those used in existing models by demonstrating the effects of the new properties both on heating rates of the Mars atmosphere and in example spectral retrieval of surface characteristics from emission spectra.
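The quoted size-distribution parameters are internally consistent under the standard Hansen-Travis relations for a lognormal distribution, assuming r_m is interpreted as the median (geometric-mean) radius and omega^2 as the variance of ln r. A quick check:

```python
import math

def lognormal_effective(r_m, variance):
    """Hansen-Travis effective radius and effective variance of a lognormal
    size distribution with median radius r_m and variance omega^2 of ln r:
    r_eff = r_m * exp(2.5 * omega^2), v_eff = exp(omega^2) - 1."""
    r_eff = r_m * math.exp(2.5 * variance)
    v_eff = math.exp(variance) - 1.0
    return r_eff, v_eff

# Parameters quoted in the abstract: r_m = 0.66 microns, omega^2 = 0.412.
r_eff, v_eff = lognormal_effective(0.66, 0.412)
# Reproduces the abstract's r_eff ~ 1.85 microns and v_eff ~ 0.51.
```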

  6. Calculation of dose contributions of electron and charged heavy particles inside phantoms irradiated by monoenergetic neutrons.

    PubMed

    Satoh, Daiki; Takahashi, Fumiaki; Endo, Akira; Ohmachi, Yasushi; Miyahara, Nobuyuki

    2008-09-01

    The radiation-transport code PHITS with an event generator mode has been applied to analyze energy depositions of electrons and charged heavy particles in two spherical phantoms and a voxel-based mouse phantom upon neutron irradiation. The calculations using the spherical phantoms quantitatively clarified the type and energy of charged particles which are released through interactions of neutrons with the phantom elements and contribute to the radiation dose. The relative contribution of electrons increased with an increase in the size of the phantom and with a decrease in the energy of the incident neutrons. Calculations with the voxel-based mouse phantom for 2.0-MeV neutron irradiation revealed that the doses to different locations inside the body are uniform, and that the energy is mainly deposited by recoil protons. The present study has demonstrated that analysis using PHITS can yield dose distributions that are accurate enough for RBE evaluation.

  7. Modeling of a Turbofan Engine with Ice Crystal Ingestion in the NASA Propulsion System Laboratory

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Jorgenson, Philip C. E.; Jones, Scott M.; Nili, Samaun

    2017-01-01

The main focus of this study is to apply a computational tool for the flow analysis of a turbine engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) at NASA Glenn Research Center. The PSL has been used to test a highly instrumented Honeywell ALF502R-5A (LF11) turbofan engine at simulated altitude operating conditions. Test data analysis with an engine cycle code and a compressor flow code was conducted to determine the values of key icing parameters that can indicate the risk of ice accretion, which can lead to engine rollback (un-commanded loss of engine thrust). The full engine aerothermodynamic performance was modeled with the Honeywell Customer Deck specifically created for the ALF502R-5A engine. The mean-line compressor flow analysis code, which includes a model of the state of the ice crystal, was used to model the air flow through the fan-core and low pressure compressor. The results of the compressor flow analyses included calculations of the ice-water flow rate to air flow rate ratio (IWAR), the local static wet bulb temperature, and the particle melt ratio throughout the flow field. It was found that the assumed particle size had a large effect on the particle melt ratio and on the local wet bulb temperature. In this study the particle size was varied parametrically to produce a non-zero calculated melt ratio in the exit guide vane (EGV) region of the low pressure compressor (LPC) for the data points that experienced a growth of blockage there and a subsequent engine called rollback (CRB). At the data points where the engine experienced a CRB, having the lowest wet bulb temperature of 492 degrees Rankine at the EGV trailing edge, the smallest particle size that produced a non-zero melt ratio (between 3 and 4 percent) was on the order of 1 micron. 
This value of melt ratio was utilized as the target for all other subsequent data points analyzed, while the particle size was varied from 1 to 9.5 microns to achieve the target melt ratio. For data points that did not experience a CRB, which had static wet bulb temperatures in the EGV region below 492 degrees Rankine, a non-zero melt ratio could not be achieved even with a 1 micron ice particle size. The highest value of static wet bulb temperature for data points that experienced engine CRB was 498 degrees Rankine, with a particle size of 9.5 microns. Based on this study of the LF11 engine test data, the static wet bulb temperature at the EGV exit for engine CRB fell in the narrow range of 492 to 498 degrees Rankine, while the minimum value of IWAR was 0.002. The rate of blockage growth due to ice accretion and boundary layer growth was estimated by scaling from a known blockage growth rate that was determined in a previous study. These results obtained from the LF11 engine analysis formed the basis of a unique “icing wedge.”

  8. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, T.D. Jr.

    1996-05-01

The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S({alpha}, {beta}) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k{sub eff}), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  9. Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B.; Ibrahim, Ahmad M.

    2017-05-01

This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with a proton beam energy of 1.3 GeV. The analysis implemented a coupled three-dimensional (3D)/two-dimensional (2D) approach that used both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) 2D deterministic code. The analysis with a proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.

  10. Evaluation of the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels using particle and heavy ion transport code system: PHITS.

    PubMed

    Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko

    2017-10-01

We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide therapy (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and in compact bone, were in good agreement with those in the literature using other MC codes. PHITS provides reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Overview of FAR-TECH's magnetic fusion energy research

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Soo; Bogatu, I. N.; Galkin, S. A.; Spencer, J. Andrew; Svidzinski, V. A.; Zhao, L.

    2017-10-01

FAR-TECH, Inc. has been working on magnetic fusion energy research for over two decades. Over the years, we have developed unique approaches to help understand the physics and resolve issues in magnetic fusion energy. The specific areas of work have been modeling RF waves in plasmas, MHD modeling and mode identification, and nano-particle plasma jets and their application to disruption mitigation. Our research highlights of recent years will be presented with examples, specifically the development of FullWave (full-wave RF code), PMARS (parallelized MARS code), and HEM (Hybrid ElectroMagnetic code). In addition, the nano-particle plasma jet (NPPJ) and its application to disruption mitigation will be presented. Work is supported by the U.S. DOE SBIR program.

  12. Modeling Cell and Tumor-Metastasis Dosimetry with the Particle and Heavy Ion Transport Code System (PHITS) Software for Targeted Alpha-Particle Radionuclide Therapy.

    PubMed

    Lee, Dongyoul; Li, Mengshi; Bednarz, Bryan; Schultz, Michael K

    2018-06-26

The use of targeted radionuclide therapy for cancer is on the rise. While beta-particle-emitting radionuclides have been extensively explored for targeted radionuclide therapy, alpha-particle-emitting radionuclides are emerging as effective alternatives. In this context, fundamental understanding of the interactions and dosimetry of these emitted particles with cells in the tumor microenvironment is critical to ascertaining the potential of alpha-particle-emitting radionuclides. One important parameter that can be used to assess these metrics is the S-value. In this study, we characterized several alpha-particle-emitting radionuclides (and their associated radionuclide progeny) regarding S-values in the cellular and tumor-metastasis environments. The Particle and Heavy Ion Transport code System (PHITS) was used to obtain S-values via Monte Carlo simulation for cells and tumor metastases resulting from interactions with the alpha-particle-emitting radionuclides lead-212 (212Pb), actinium-225 (225Ac) and bismuth-213 (213Bi); these values were compared to the beta-particle-emitting radionuclides yttrium-90 (90Y) and lutetium-177 (177Lu) and the Auger-electron-emitting radionuclide indium-111 (111In). The effect of cellular internalization on the S-value was explored at increasing degrees of internalization for each radionuclide. This aspect of S-value determination was further explored in a cell line-specific fashion for six different cancer cell lines based on the cell dimensions obtained by confocal microscopy. S-values from PHITS were in good agreement with MIRDcell S-values (cellular S-values) and the values found by Hindié et al. (tumor S-values). In the cellular model, the 212Pb and 213Bi decay series produced S-values that were 50- to 120-fold higher than 177Lu, while the 225Ac decay series analysis suggested S-values that were 240- to 520-fold higher than 177Lu. 
S-values arising with 100% cellular internalization were two- to sixfold higher for the nucleus when compared to 0% internalization. The tumor dosimetry model defines the relative merit of radionuclides and suggests alpha particles may be effective for large tumors as well as small tumor metastases. These results from PHITS modeling substantiate emerging evidence that alpha-particle-emitting radionuclides may be an effective alternative to beta-particle-emitting radionuclides for targeted radionuclide therapy due to preferred dose-deposition profiles in the cellular and tumor metastasis context. These results further suggest that internalization of alpha-particle-emitting radionuclides via radiolabeled ligands may increase the relative biological effectiveness of radiotherapeutics.
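In the MIRD scheme a cellular S-value is absorbed dose per decay, S = phi * E / m, with phi the absorbed fraction in the target region. A back-of-envelope sketch for a single alpha emission in a cell-sized water sphere (the energy, radius, and phi = 1 are illustrative; real alpha ranges exceed a cell diameter, so phi < 1 in practice, which is why Monte Carlo codes like PHITS are needed):

```python
import math

MEV_TO_J = 1.602176634e-13  # conversion factor, MeV to joules

def sphere_s_value(energy_mev, radius_um, absorbed_fraction=1.0,
                   density_g_cm3=1.0):
    """Self-dose S-value (Gy per decay) for a homogeneous sphere:
    S = phi * E / m, with m = (4/3) * pi * r^3 * rho."""
    r_cm = radius_um * 1.0e-4
    mass_kg = (4.0 / 3.0) * math.pi * r_cm ** 3 * density_g_cm3 * 1.0e-3
    return absorbed_fraction * energy_mev * MEV_TO_J / mass_kg

# ~6 MeV alpha, 10-micron-radius water-equivalent cell: roughly 0.23 Gy per
# decay, illustrating why alpha emitters deliver such high cell-level doses.
s = sphere_s_value(6.0, 10.0)
```

The large S-value ratios quoted above arise from exactly this mass scaling combined with the much higher energy deposited per decay by an alpha chain than by a single 177Lu beta.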

  13. Directionality of Flare-Accelerated Particles from γ-ray Lines

    NASA Astrophysics Data System (ADS)

    Share, G. H.; Murphy, R. J.

    2000-05-01

The energies and widths of γ-ray lines emitted by ambient nuclei excited by flare-accelerated protons and α-particles provide information on their directionality and spectra, and on the uniformity of the interaction region. For example, the γ-rays observed from a downward beam of particles impacting at 0° heliocentric angle would exhibit a clear Doppler red-shift and some broadening, dependent on the spectrum of the particles. In contrast, γ-rays observed from the same beam of particles impacting at 90° would be neither observably shifted nor broadened. We have studied the energies and widths of strong lines from de-excitations of 20Ne, 12C, and 16O in solar flares as a function of heliocentric angle. We use spectra from 21 flares observed with NASA's Solar Maximum Mission/GRS and Compton Observatory/OSSE experiments. The line energies of all three nuclei exhibit ~0.9% red-shifts from their laboratory values for flares observed at heliocentric angles <40°. In contrast, the energies are not significantly shifted for flares observed at angles >80°. The lines at all heliocentric angles are broadened by ~2.5% to 4%. These results are suggestive of a broad downward distribution of accelerated particles in flares, or an isotropic distribution in a medium that has a significant density gradient. Detailed comparisons of these data with results from the gamma-ray production code (Ramaty, et al. 1979, ApJS, 40, 487; Murphy, et al. 1991, ApJ, 371, 793) are required in order to place constraints on the angular distributions of particles. This research has been supported by NASA grant W-18995.
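The heliocentric-angle dependence follows from the first-order Doppler shift of a line emitted by recoiling nuclei with a net downward velocity: the observed energy is reduced by a factor (1 - beta*cos θ) along the line of sight. A small sketch using the ~0.9% red-shift reported for near-disk-center flares and the 4.438 MeV 12C de-excitation line (the single uniform beta is an illustrative simplification of the real recoil-velocity distribution):

```python
import math

def shifted_line_energy(e0_mev, beta_los, heliocentric_deg):
    """First-order Doppler shift of a de-excitation line: a downward beam
    recedes from the observer, red-shifting the line by (1 - beta*cos(theta)),
    a shift that vanishes at the limb (theta = 90 deg)."""
    return e0_mev * (1.0 - beta_los * math.cos(math.radians(heliocentric_deg)))

e_center = shifted_line_energy(4.438, 0.009, 0.0)   # disk center: full ~0.9% red-shift
e_limb = shifted_line_energy(4.438, 0.009, 90.0)    # limb: essentially unshifted
```

This reproduces the qualitative pattern in the data: red-shifted lines at angles <40° and unshifted lines at angles >80°.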

  14. Eulerian and Lagrangian Plasma Jet Modeling for the Plasma Liner Experiment

    NASA Astrophysics Data System (ADS)

    Hatcher, Richard; Cassibry, Jason; Stanic, Milos; Loverich, John; Hakim, Ammar

    2011-10-01

The Plasma Liner Experiment (PLX) aims to demonstrate the feasibility of using spherically-convergent plasma jets to form an imploding plasma liner. Our group has modified two hydrodynamic simulation codes to include radiative loss, tabular equations of state (EOS), and thermal transport. Nautilus, created by Tech-X Corporation, is a finite-difference Eulerian code which solves the MHD equations formulated as systems of hyperbolic conservation laws. The other is SPHC, a smoothed particle hydrodynamics code produced by Stellingwerf Consulting. Use of the Lagrangian fluid-particle approach of SPH is motivated by its ability to accurately track jet interfaces, the plasma-vacuum boundary, and the mixing of various layers, but Eulerian codes have been in development for much longer and have better shock capturing. We validate these codes against experimental measurements of jet propagation, expansion, and the merging of two jets. Precursor jets are observed to form at the jet interface. Conditions that govern the evolution of two and more merging jets are explored.

  15. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons, either as single particles or coupled particles, can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  16. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) has been written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure assisted by this approach appears very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
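    The PSO component of such a design loop can be sketched generically; the snippet below is a textbook PSO minimizing a stand-in quadratic mismatch objective (not the authors' homemade code, and the objective, bounds, and hyperparameters are illustrative):

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best; the swarm shares a global best; velocities blend inertia,
    cognitive, and social pulls."""
    dim = len(bounds)
    rng = random.Random(42)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: squared mismatch of two hypothetical tank parameters
# against their target values (1.0, 1.0)
best, best_val = pso(lambda x: sum((xi - 1.0) ** 2 for xi in x), [(-5.0, 5.0)] * 2)
```

In a real SCL design loop the objective would instead score a tank geometry via the simplified cavity model (e.g. resonant-frequency and field-flatness error), which is where the physics enters.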

  17. Particle-gas dynamics in the protoplanetary nebula

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.

    1991-01-01

    In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.

  18. Particle bed reactor modeling

    NASA Technical Reports Server (NTRS)

    Sapyta, Joe; Reid, Hank; Walton, Lew

    1993-01-01

    The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

  19. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency-domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  20. PlasmaPy: initial development of a Python package for plasma physics

    NASA Astrophysics Data System (ADS)

    Murphy, Nicholas; Leonard, Andrew J.; Stańczak, Dominik; Haggerty, Colby C.; Parashar, Tulasi N.; Huang, Yu-Min; PlasmaPy Community

    2017-10-01

    We report on initial development of PlasmaPy: an open source community-driven Python package for plasma physics. PlasmaPy seeks to provide core functionality that is needed for the formation of a fully open source Python ecosystem for plasma physics. PlasmaPy prioritizes code readability, consistency, and maintainability while using best practices for scientific computing such as version control, continuous integration testing, embedding documentation in code, and code review. We discuss our current and planned capabilities, including features presently under development. The development roadmap includes features such as fluid and particle simulation capabilities, a Grad-Shafranov solver, a dispersion relation solver, atomic data retrieval methods, and tools to analyze simulations and experiments. We describe several ways to contribute to PlasmaPy. PlasmaPy has a code of conduct and is being developed under a BSD license, with a version 0.1 release planned for 2018. The success of PlasmaPy depends on active community involvement, so anyone interested in contributing to this project should contact the authors. This work was partially supported by the U.S. Department of Energy.
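    To give a flavor of the kind of core functionality such a package provides, here is a hedged sketch of one standard plasma parameter using textbook SI formulas in plain Python; this is not PlasmaPy's actual API, only an illustration of the sort of formulary function it targets:

```python
import math

# Physical constants (CODATA, SI units)
E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m
M_ELECTRON = 9.1093837015e-31   # electron mass, kg

def plasma_frequency(n_e):
    """Electron plasma frequency omega_pe = sqrt(n_e e^2 / (eps0 m_e)),
    in rad/s, for electron number density n_e in m^-3."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_ELECTRON))

# Typical laboratory plasma density, 1e19 m^-3 (illustrative)
wpe = plasma_frequency(1e19)
```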

  1. Recent Improvements of Particle and Heavy Ion Transport code System: PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Iwamoto, Yosuke; Hashimoto, Shintaro; Ogawa, Tatsuhiko; Furuta, Takuya; Abe, Shin-ichiro; Kai, Takeshi; Matsuda, Norihiro; Okumura, Keisuke; Kai, Tetsuya; Iwase, Hiroshi; Sihver, Lembit

    2017-09-01

    The Particle and Heavy Ion Transport code System, PHITS, has been developed under the collaboration of several research institutes in Japan and Europe. This system can simulate the transport of most particles with energies up to 1 TeV (per nucleon for ions) using various nuclear reaction models and data libraries. More than 2,500 registered researchers and technicians have used this system for applications such as accelerator design, radiation shielding and protection, medical physics, and space- and geo-sciences. This paper summarizes the physics models and functions recently implemented in PHITS, between versions 2.52 and 2.88, especially those related to source generation useful for simulating brachytherapy and internal exposures to radioisotopes.

  2. Track-structure simulations for charged particles.

    PubMed

    Dingfelder, Michael

    2012-11-01

    Monte Carlo track-structure simulations provide a detailed and accurate picture of radiation transport of charged particles through condensed matter of biological interest. Liquid water serves as a surrogate for soft tissue and is used in most Monte Carlo track-structure codes. Basic theories of radiation transport and track-structure simulations are discussed, and differences from condensed-history codes are highlighted. Interaction cross sections for electrons, protons, alpha particles, and light and heavy ions are required input data for track-structure simulations. Different calculation methods, including the plane-wave Born approximation, dielectric theory, and semi-empirical approaches, are presented using liquid water as a target. Low-energy electron transport and light ion transport are discussed as areas of special interest.

  3. The "trapped fraction" and interfacial jumps of concentration in fission products release from coated fuel particles

    NASA Astrophysics Data System (ADS)

    Ivanov, A. S.; Rusinkevich, A. A.; Taran, M. D.

    2018-01-01

    The FP Kinetics computer code [1], designed for calculation of fission product release from HTGR coated fuel particles, was modified to allow consideration of chemical bonding, limited-solubility effects, and component concentration jumps at the interfaces between coating layers. Curves of Cs release from coated particles calculated with the FP Kinetics and PARFUME [2] codes were compared. It was found that accounting for concentration jumps at the silicon carbide layer interfaces helps explain some experimental data on Cs release obtained from post-irradiation heating tests. The need for experiments measuring solubility limits in the coating materials is noted.

  4. Helium ions at the Heidelberg Ion Beam Therapy Center: comparisons between FLUKA Monte Carlo code predictions and dosimetric measurements

    NASA Astrophysics Data System (ADS)

    Tessonnier, T.; Mairani, A.; Brons, S.; Sala, P.; Cerutti, F.; Ferrari, A.; Haberer, T.; Debus, J.; Parodi, K.

    2017-08-01

    In the field of particle therapy helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4He ion beam modeling in preparation of clinical establishment at HIT. 
Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo treatment planning engine based on the same FLUKA code, or an independent analytical planning system fed with a validated database of inputs calculated with FLUKA.

  5. Helium ions at the Heidelberg Ion Beam Therapy Center: comparisons between FLUKA Monte Carlo code predictions and dosimetric measurements.

    PubMed

    Tessonnier, T; Mairani, A; Brons, S; Sala, P; Cerutti, F; Ferrari, A; Haberer, T; Debus, J; Parodi, K

    2017-08-01

    In the field of particle therapy helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4He ion beam modeling in preparation of clinical establishment at HIT.
Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo treatment planning engine based on the same FLUKA code, or an independent analytical planning system fed with a validated database of inputs calculated with FLUKA.

  6. Finite-element 3D simulation tools for high-current relativistic electron beams

    NASA Astrophysics Data System (ADS)

    Humphries, Stanley; Ekdahl, Carl

    2002-08-01

    The DARHT second-axis injector is a challenge for computer simulations. Electrons are subject to strong beam-generated forces. The fields are fully three-dimensional and accurate calculations at surfaces are critical. We describe methods applied in OmniTrak, a 3D finite-element code suite that can address DARHT and the full range of charged-particle devices. The system handles mesh generation, electrostatics, magnetostatics and self-consistent particle orbits. The MetaMesh program generates meshes of conformal hexahedrons to fit any user geometry. The code has the unique ability to create structured conformal meshes with cubic logic. Organized meshes offer advantages in speed and memory utilization in the orbit and field solutions. OmniTrak is a versatile charged-particle code that handles 3D electric and magnetic field solutions on independent meshes. The program can update both 3D field solutions from the calculated beam space-charge and current-density. We shall describe numerical methods for orbit tracking on a hexahedron mesh. Topics include: 1) identification of elements along the particle trajectory, 2) fast searches and adaptive field calculations, 3) interpolation methods to terminate orbits on material surfaces, 4) automatic particle generation on multiple emission surfaces to model space-charge-limited emission and field emission, 5) flexible Child law algorithms, 6) implementation of the dual potential model for 3D magnetostatics, and 7) assignment of charge and current from model particle orbits for self-consistent fields.

  7. IMPETUS: consistent SPH calculations of 3D spherical Bondi accretion on to a black hole

    NASA Astrophysics Data System (ADS)

    Ramírez-Velasquez, J. M.; Sigalotti, L. Di G.; Gabbasov, R.; Cruz, F.; Klapp, J.

    2018-07-01

    We present three-dimensional calculations of spherically symmetric Bondi accretion on to a stationary supermassive black hole of mass 10⁸ M⊙ within a radial range of 0.02-10 pc, using a modified version of the smoothed particle hydrodynamics GADGET-2 code, which ensures approximate first-order consistency (i.e. second-order accuracy) for the particle approximation. First-order consistency is restored by allowing the number of neighbours, nneigh, and the smoothing length, h, to vary with the total number of particles, N, such that the asymptotic limits nneigh → ∞ and h → 0 hold as N → ∞. The ability of the method to reproduce the isothermal (γ = 1) and adiabatic (γ = 5/3) Bondi accretion is investigated with increased spatial resolution. In particular, for the isothermal models, the numerical radial profiles closely match the Bondi solution, except near the accretor, where the density and radial velocity are slightly underestimated. However, as nneigh is increased and h is decreased, the calculations approach first-order consistency and the deviations from the Bondi solution decrease. The density and radial velocity profiles for the adiabatic models are qualitatively similar to those for the isothermal Bondi accretion. Steady-state Bondi accretion is reproduced by the highly resolved consistent models with a relative error of ≲1 per cent for γ = 1 and ~9 per cent for γ = 5/3, with the adiabatic accretion taking longer than the isothermal case to reach steady flow. The performance of the method is assessed by comparing the results with those obtained using the standard GADGET-2 and GIZMO codes.
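    For reference, the analytic quantities against which such simulations are checked can be written down directly; a sketch using the textbook Bondi formulas, where the sound speed and ambient density below are illustrative placeholders, not values from the paper:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def bondi_radius(m_bh, c_s):
    """Bondi radius r_B = G M / c_s^2, in metres."""
    return G * m_bh / c_s**2

def bondi_rate(m_bh, c_s, rho_inf, lam):
    """Bondi accretion rate Mdot = 4 pi lambda (G M)^2 rho_inf / c_s^3, kg/s.
    The eigenvalue lambda is e^{3/2}/4 (~1.12) for gamma = 1 and 1/4 for
    gamma = 5/3."""
    return 4.0 * math.pi * lam * (G * m_bh)**2 * rho_inf / c_s**3

m_bh = 1e8 * M_SUN           # black hole mass from the paper
c_s = 1.0e4                  # 10 km/s ambient sound speed (illustrative)
rho = 1.0e-22                # ambient density, kg/m^3 (illustrative)
lam_iso = math.e**1.5 / 4.0  # isothermal eigenvalue, ~1.12

r_b = bondi_radius(m_bh, c_s)
mdot = bondi_rate(m_bh, c_s, rho, lam_iso)
```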

  8. Microparticles: Facile and High-Throughput Synthesis of Functional Microparticles with Quick Response Codes (Small 24/2016).

    PubMed

    Ramirez, Lisa Marie S; He, Muhan; Mailloux, Shay; George, Justin; Wang, Jun

    2016-06-01

    Microparticles carrying quick response (QR) barcodes are fabricated by J. Wang and co-workers on page 3259, using a massive coding of dissociated elements (MiCODE) technology. Each microparticle can bear a special custom-designed QR code that enables encryption or tagging with unlimited multiplexity, and the QR code can be easily read by cellphone applications. The utility of MiCODE particles in multiplexed DNA detection and microtagging for anti-counterfeiting is explored. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using High Resolution Lyon-Fedder-Mobarry (LFM) Code

    NASA Astrophysics Data System (ADS)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

    Test particle simulations of electron injection by bursty bulk flows (BBFs) have been performed using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run at high resolution (oct resolution) with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward-propagating BBFs. The simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere they are further energized by drift resonance with the azimuthal electric field. In addition, pitch angle scattering of electrons, resulting in violation of the conservation of the first adiabatic invariant, has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak magnetic field region with a strong gradient of the field perturbed by the BBFs. References: 1. Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of >10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. 2. Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) Global MHD Magnetospheric Simulation Code, J. Atm. and Solar-Terrestrial Phys., 66, Issue 15-16, 1333-1350, doi:10.1016/j.jastp. 3. Wiltberger, M., Merkin, V. G., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.
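    Test-particle tracers of this kind typically advance electrons with a Boris-type pusher in the MHD fields; a minimal non-relativistic sketch (not the Kress et al. tracer itself, and the field and timestep values below are illustrative):

```python
def boris_push(v, E, B, q, m, dt):
    """One Boris step: half electric kick, exact-magnitude magnetic
    rotation, half electric kick. Conserves |v| exactly when E = 0."""
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    vm = [v[i] + q * E[i] * dt / (2 * m) for i in range(3)]      # half kick
    t = [q * B[i] * dt / (2 * m) for i in range(3)]              # rotation vector
    t2 = t[0]*t[0] + t[1]*t[1] + t[2]*t[2]
    s = [2 * c / (1 + t2) for c in t]
    vprime = [vm[i] + c for i, c in enumerate(cross(vm, t))]
    vplus = [vm[i] + c for i, c in enumerate(cross(vprime, s))]
    return [vplus[i] + q * E[i] * dt / (2 * m) for i in range(3)]  # half kick

# Electron gyrating in a uniform 10 nT field (illustrative magnetotail value)
q, m = -1.602e-19, 9.109e-31
v = [1.0e5, 0.0, 0.0]
for _ in range(1000):
    v = boris_push(v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0e-8], q, m, 1.0e-5)
```

In a full tracer the E and B values would be interpolated from the time-dependent LFM grid at the particle position each step.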

  10. Building 1D resonance broadened quasilinear (RBQ) code for fast ions Alfvénic relaxations

    NASA Astrophysics Data System (ADS)

    Gorelenkov, Nikolai; Duarte, Vinicius; Berk, Herbert

    2016-10-01

    The performance of a burning plasma is limited by the confinement of super-Alfvénic fusion products, e.g. alpha particles, which are capable of resonating with the Alfvénic eigenmodes (AEs). The effect of AEs on fast ions is evaluated using a resonance-line-broadened diffusion coefficient. The interaction of fast ions and AEs is captured for cases with either isolated or overlapping modes. A new code, RBQ1D, is being built which constructs diffusion coefficients based on realistic eigenfunctions determined by the ideal MHD code NOVA. The wave-particle interaction can be reduced to one-dimensional dynamics where, for the Alfvénic modes, the particle kinetic energy is typically nearly constant. Hence, to a good approximation, the quasi-linear (QL) diffusion equation contains derivatives only in the angular momentum. The resulting one-dimensional diffusion equation is solved efficiently and simultaneously for all particles, together with the equation for the evolution of the wave angular momentum. The evolution of the fast-ion constants of motion is governed by the QL diffusion equations, which are adapted to find the ion distribution function.
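    A one-dimensional QL equation of this form, ∂f/∂t = ∂/∂P [ D(P) ∂f/∂P ], can be advanced with a conservative finite-difference scheme; a minimal explicit sketch (not RBQ1D itself; the grid, diffusion profile, and initial condition are illustrative):

```python
def ql_diffuse(f, D, dP, dt, steps):
    """Conservative explicit update of df/dt = d/dP (D df/dP) with
    zero-flux boundaries, so the total number of particles is conserved.
    Stability of the explicit step requires dt <= dP**2 / (2 * max(D))."""
    n = len(f)
    for _ in range(steps):
        flux = [0.0] * (n + 1)                # flux at cell interfaces
        for j in range(1, n):
            Dj = 0.5 * (D[j - 1] + D[j])      # interface diffusion coefficient
            flux[j] = -Dj * (f[j] - f[j - 1]) / dP
        f = [f[j] - dt / dP * (flux[j + 1] - flux[j]) for j in range(n)]
    return f

# Spread an initially peaked distribution with a flat D (illustrative)
f0 = [0.0] * 50
f0[25] = 1.0
f1 = ql_diffuse(f0, [1.0] * 50, dP=0.1, dt=0.002, steps=200)
```

In RBQ1D the coefficient D(P) would instead be the resonance-broadened diffusion coefficient built from the NOVA eigenfunctions, and it would evolve with the mode amplitudes.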

  11. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    NASA Astrophysics Data System (ADS)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. A variance reduction technique (VRT) was implemented in MC to shorten it. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history count. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model without VRT parameters. The validated MC model simulation was then repeated applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2, and 5 MeV, using 20 × 10⁷ particle histories. Range rejection at 5 MeV generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷ histories. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation time while preserving accuracy.

  12. Spacecraft charging analysis with the implicit particle-in-cell code iPic3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deca, J.; Lapenta, G.; Marchand, R.

    2013-10-15

    We present the first results on the analysis of spacecraft charging with the implicit particle-in-cell code iPic3D, designed for running on massively parallel supercomputers. The numerical algorithm is presented, highlighting the implementation of the electrostatic solver and the immersed boundary algorithm; the latter creates the possibility to handle complex spacecraft geometries. As a first step in the verification process, a comparison is made between the floating potential obtained with iPic3D and with Orbital Motion Limited theory for a spherical particle in a uniform stationary plasma. Second, the numerical model is verified for a CubeSat benchmark by comparing simulation results with those of PTetra for space environment conditions with increasing levels of complexity. In particular, we consider spacecraft charging from plasma particle collection, photoelectron and secondary electron emission. The influence of a background magnetic field on the floating potential profile near the spacecraft is also considered. Although the numerical approaches in iPic3D and PTetra are rather different, good agreement is found between the two models, raising the level of confidence in both codes to predict and evaluate the complex plasma environment around spacecraft.

  13. Shock behavior of explosives about the C-J (Chapman-Jouguet) point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, P.W.

    1989-01-01

    Experimental data for pressure and particle velocity along the Hugoniot of detonation reaction products for a number of explosives are correlated in the reduced-parameter form P/P_cj versus u/u_cj. Two correlations are found: P/P_cj = a + b(u/u_cj) + c(u/u_cj)² when P/P_cj > 0.08, and P/P_cj = m(u/u_cj)^n when P/P_cj < 0.08. The correlations yield results that agree reasonably with code calculations. 13 refs., 3 figs., 1 tab.
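    The two-branch correlation can be evaluated as a single piecewise function; a sketch with the fit coefficients left as parameters, since the paper's fitted values are not reproduced here:

```python
def reduced_hugoniot(u_ratio, a, b, c, m, n):
    """Reduced pressure P/P_cj as a function of u/u_cj:
    quadratic branch where P/P_cj > 0.08, power-law branch below.
    The coefficients a, b, c, m, n are the fitted values from the
    paper (placeholders here, supplied by the caller)."""
    p_quad = a + b * u_ratio + c * u_ratio**2
    if p_quad > 0.08:
        return p_quad
    return m * u_ratio**n
```

Note that at the C-J point itself (u/u_cj = 1, P/P_cj = 1) consistency requires a + b + c = 1, which any physically sensible coefficient set must satisfy.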

  14. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly more difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation, and many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenters' ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite-volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal-boundary/variable-grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured-grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.

  15. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  16. Particle trajectory computation on a 3-dimensional engine inlet. Final Report Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, J. J.

    1986-01-01

    A 3-dimensional particle trajectory computer code was developed to compute the distribution of water droplet impingement efficiency on a 3-dimensional engine inlet. The computed results provide the essential droplet impingement data required for the engine inlet anti-icing system design and analysis. The droplet trajectories are obtained by solving the trajectory equation using the fourth order Runge-Kutta and Adams predictor-corrector schemes. A compressible 3-D full potential flow code is employed to obtain a cylindrical grid definition of the flowfield on and about the engine inlet. The inlet surface is defined mathematically through a system of bi-cubic parametric patches in order to compute the droplet impingement points accurately. Analysis results of the 3-D trajectory code obtained for an axisymmetric droplet impingement problem are in good agreement with NACA experimental data. Experimental data are not yet available for the engine inlet impingement problem analyzed. Applicability of the method to solid particle impingement problems, such as engine sand ingestion, is also demonstrated.
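    The fourth-order Runge-Kutta step used in such trajectory codes can be sketched directly; the drag law below is a simple Stokes relaxation toward the local air velocity, an illustrative stand-in for the full droplet drag model, with made-up flow values:

```python
def rk4_step(state, t, dt, deriv):
    """Classical fourth-order Runge-Kutta step for dy/dt = deriv(t, y),
    where y is a list of state variables."""
    k1 = deriv(t, state)
    k2 = deriv(t + dt/2, [y + dt/2 * k for y, k in zip(state, k1)])
    k3 = deriv(t + dt/2, [y + dt/2 * k for y, k in zip(state, k2)])
    k4 = deriv(t + dt, [y + dt * k for y, k in zip(state, k3)])
    return [y + dt/6 * (a + 2*b + 2*c + d)
            for y, a, b, c, d in zip(state, k1, k2, k3, k4)]

U_AIR, TAU = 10.0, 0.05   # airflow speed (m/s), droplet response time (s)

def droplet_deriv(t, state):
    x, v = state
    return [v, (U_AIR - v) / TAU]   # dx/dt = v, dv/dt = Stokes-type drag

state, t, dt = [0.0, 0.0], 0.0, 0.001
for _ in range(200):                # integrate to t = 0.2 s
    state = rk4_step(state, t, dt, droplet_deriv)
    t += dt
# analytic check for this linear drag law: v(t) = U_AIR * (1 - exp(-t/TAU))
```

In the actual impingement code the air velocity at the droplet position would come from the 3-D full-potential flow solution rather than a constant, and the drag coefficient would depend on the droplet Reynolds number.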

  17. Computer modeling of test particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Decker, Robert B.

    1988-01-01

    The present evaluation of the basic techniques and illustrative results of numerical codes suitable for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.

  18. gadfly: A pandas-based Framework for Analyzing GADGET Simulation Data

    NASA Astrophysics Data System (ADS)

    Hummel, Jacob A.

    2016-11-01

    We present the first public release (v0.1) of the open-source gadget Dataframe Library: gadfly. The aim of this package is to leverage the capabilities of the broader python scientific computing ecosystem by providing tools for analyzing simulation data from the astrophysical simulation codes gadget and gizmo using pandas, a thoroughly documented, open-source library providing high-performance, easy-to-use data structures that is quickly becoming the standard for data analysis in python. Gadfly is a framework for analyzing particle-based simulation data stored in the HDF5 format using pandas DataFrames. The package enables efficient memory management, includes utilities for unit handling, coordinate transformations, and parallel batch processing, and provides highly optimized routines for visualizing smoothed-particle hydrodynamics data sets.
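    The pandas-centric workflow gadfly targets can be illustrated independently of the package; a sketch that builds a DataFrame from already-loaded particle arrays and computes a mass-weighted centre of mass (the column names and values are illustrative, not gadfly's actual schema):

```python
import pandas as pd

# Particle data as it might come out of a GADGET/GIZMO snapshot reader
particles = pd.DataFrame({
    "x":    [0.0, 1.0, 2.0, 3.0],
    "y":    [0.0, 0.0, 1.0, 1.0],
    "z":    [0.0, 0.0, 0.0, 0.0],
    "mass": [1.0, 1.0, 2.0, 2.0],
})

# Mass-weighted centre of mass via vectorized DataFrame operations
com = (particles[["x", "y", "z"]].mul(particles["mass"], axis=0).sum()
       / particles["mass"].sum())
```

The appeal of the DataFrame representation is that selections, derived columns, and group-wise reductions over millions of particles become one-liners instead of hand-written loops.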

  19. Status of LANL Efforts to Effectively Use Sequoia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William David

    2015-05-14

Los Alamos National Laboratory (LANL) is currently working on 3 new production applications: VPIC, xRage, and Pagosa. VPIC was designed to be a 3D relativistic, electromagnetic Particle-In-Cell code for plasma simulation. xRage is a 3D AMR-mesh, multi-physics hydro code. Pagosa is a 3D structured-mesh, multi-physics hydro code.

  20. Final Report for Geometric Observers and Particle Filtering for Controlled Active Vision

    DTIC Science & Technology

    2016-12-15

Final Report (01Sep06 - 09May11): Geometric Observers and Particle Filtering for Controlled Active Vision, by Allen R. Tannenbaum, School of Electrical and Computer Engineering, Georgia Institute of Technology. Report topics include conformal area minimizing flows and particle filters.

  1. Latent uncertainties of the precalculated track Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the paper. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction.
Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.

  2. Latent uncertainties of the precalculated track Monte Carlo method.

    PubMed

    Renaud, Marc-André; Roberge, David; Seuntjens, Jan

    2015-01-01

While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the paper. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose.
In proton calculations, a small (≤ 1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
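The reported behaviour of the latent uncertainty, a spread that shrinks as the number of unique tracks in the bank grows, can be illustrated with a toy resampling experiment: reusing a finite bank freezes its mean, so the bank-to-bank spread of that mean is the latent uncertainty, which scales like 1/√N. The exponential dose-per-track distribution below is an arbitrary stand-in, not the paper's data.

```python
import numpy as np

def latent_std(bank_size, n_banks=2000, seed=0):
    """Empirical spread of the mean dose of a pregenerated track bank.

    Each 'track' deposits a random dose drawn from a fixed distribution;
    the std of the per-bank means across many hypothetical banks is the
    latent uncertainty for that bank size.
    """
    rng = np.random.default_rng(seed)
    doses = rng.exponential(1.0, size=(n_banks, bank_size))
    return doses.mean(axis=1).std()
```

Quadrupling the bank size should roughly halve the latent spread, which is the practical rule for picking a bank size for a target uncertainty.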

  3. Particle acceleration and transport at a 2D CME-driven shock using the HAFv3 and PATH Code

    NASA Astrophysics Data System (ADS)

    Li, G.; Ao, X.; Fry, C. D.; Verkhoglyadova, O. P.; Zank, G. P.

    2012-12-01

We study particle acceleration at a 2D CME-driven shock and the subsequent transport in the inner heliosphere (up to 2 AU) by coupling the kinematic Hakamada-Akasofu-Fry version 3 (HAFv3) solar wind model (Hakamada and Akasofu, 1982; Fry et al., 2003) with the Particle Acceleration and Transport in the Heliosphere (PATH) model (Zank et al., 2000; Li et al., 2003, 2005; Verkhoglyadova et al., 2009). HAFv3 provides the evolution of a two-dimensional shock geometry and other plasma parameters, which are fed into the PATH model to investigate the effect of a varying shock geometry on particle acceleration and transport. The transport module of the PATH model is parallelized and utilizes state-of-the-art GPU computing to achieve a rapid physics-based numerical description of the interplanetary energetic particles. Together with the fast execution of the HAFv3 model, the coupled code makes it possible to nowcast/forecast the interplanetary radiation environment.

  4. Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors

    NASA Astrophysics Data System (ADS)

    Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.

    2016-05-01

    This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for Xeon Phi architecture and present our experience in the porting and optimization of the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization leading to an overall 4.2 × speedup on CPU and 7.5 × on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6 ×.
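One of the locality optimizations mentioned, keeping particles that belong to the same cell contiguous in memory, can be sketched as a periodic stable sort by cell index. This is illustrative only (1-D, NumPy), not PICADOR's actual implementation.

```python
import numpy as np

def bin_particles(positions, cell_size, n_cells):
    """Reorder particles so those in the same cell are contiguous in memory,
    a standard locality optimization for field-gather and deposition loops.
    Shown in 1-D; a real PIC code bins on a 2-D/3-D cell index."""
    cells = np.clip((positions / cell_size).astype(np.int64), 0, n_cells - 1)
    order = np.argsort(cells, kind="stable")   # stable keeps in-cell order
    return positions[order], cells[order]
```

After the sort, all particles of cell 0 come first, then cell 1, and so on, so a deposition loop touches each cell's field data exactly once while streaming through memory.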

  5. Three-dimensional water droplet trajectory code validation using an ECS inlet geometry

    NASA Technical Reports Server (NTRS)

    Breer, Marlin D.; Goodman, Mark P.

    1993-01-01

    A task was completed under NASA contract, the purpose of which was to validate a three-dimensional particle trajectory code with existing test data obtained from the Icing Research Tunnel at NASA-LeRC. The geometry analyzed was a flush-mounted environmental control system (ECS) inlet. Results of the study indicated good overall agreement between analytical predictions and wind tunnel test results at most flight conditions. Difficulties were encountered when predicting impingement characteristics of the droplets less than or equal to 13.5 microns in diameter. This difficulty was corrected to some degree by modifications to a module of the particle trajectory code; however, additional modifications will be required to accurately predict impingement characteristics of smaller droplets.

  6. Comparison of Stopping Power and Range Databases for Radiation Transport Study

    NASA Technical Reports Server (NTRS)

    Tai, H.; Bichsel, Hans; Wilson, John W.; Shinn, Judy L.; Cucinotta, Francis A.; Badavi, Francis F.

    1997-01-01

The codes used to calculate stopping power and range for the space radiation shielding program at the Langley Research Center are based on the work of Ziegler, but with modifications. As more experience is gained from experiments at heavy ion accelerators, prudence dictates a reevaluation of the current databases. Numerical values of stopping power and range calculated from four different codes currently in use are presented for selected ions and materials in the energy domain suitable for space radiation transport. This study of radiation transport has found that for most collision systems and for intermediate particle energies, the codes generally agree to within 1 percent. However, greater discrepancies are seen for heavy systems, especially at low particle energies.

  7. Computer and laboratory simulation of interactions between spacecraft surfaces and charged-particle environments

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolations from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.

  8. Inter-comparison of Computer Codes for TRISO-based Fuel Micro-Modeling and Performance Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Boer; Chang Keun Jo; Wen Wu

    2010-10-01

The Next Generation Nuclear Plant (NGNP), the Deep Burn Pebble Bed Reactor (DB-PBR) and the Deep Burn Prismatic Block Reactor (DB-PMR) are all based on fuels that use TRISO particles as their fundamental constituent. The TRISO particle properties include very high durability in radiation environments, hence the designs' reliance on the TRISO to form the principal barrier to radioactive materials release. This durability forms the basis for the selection of this fuel type for applications such as Deep Burn (DB), which require exposures up to four times those expected for light water reactors. It follows that the study and prediction of the durability of TRISO particles must be carried out as part of the safety and overall performance characterization of all the designs mentioned above. Such evaluations have been carried out independently by the performers of the DB project using independently developed codes. These codes, PASTA, PISA and COPA, incorporate models for stress analysis on the various layers of the TRISO particle (and of the intervening matrix material for some of them); models for fission product release, migration, and accumulation just inside the SiC layer of the TRISO particle; models for free oxygen and CO formation and migration to the same location; models for the temperature field within the various layers of the TRISO particle; and models for the prediction of failure rates. All these models may be either internal to the code or external. This large number of models, together with the possibility of different constitutive data, model formulations, and solution techniques, makes it highly unlikely that the codes would give identical results when modeling identical situations. The purpose of this paper is to present the results of an inter-comparison between the codes and to identify areas of agreement and areas that need reconciliation.
The inter-comparison has been carried out by the cooperating institutions using a set of pre-defined TRISO conditions (burnup levels, temperature or power levels, etc.) and the outcome will be tabulated in the full-length paper. The areas of agreement will be pointed out and the areas that require further modeling or reconciliation will be shown. In general the agreement between the codes is good, within less than one order of magnitude in the prediction of TRISO failure rates.

  9. BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs

    NASA Astrophysics Data System (ADS)

    Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes

    2017-06-01

Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required for an industrial-scale simulation consisting of tens of millions of particles can run to months on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open-source GPU-based DEM code BlazeDEM3D-GPU, which can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
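The heart of a DEM time step is the contact force evaluation. A minimal linear-spring normal-contact sketch for two spheres is shown below; it is illustrative only, and far simpler than BlazeDEM3D-GPU's polyhedral contact models.

```python
import numpy as np

def normal_contact_force(x1, x2, r1, r2, kn):
    """Linear-spring normal force on sphere 1 from contact with sphere 2.

    x1, x2 : center positions (3-vectors); r1, r2 : radii; kn : spring stiffness.
    Returns the zero vector when the spheres do not touch.
    """
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)            # not in contact
    return kn * overlap * d / dist    # repulsive, along the line of centers
```

Production codes add a dashpot (damping) term and tangential friction, but the overlap-proportional repulsion above is the basic soft-sphere DEM building block that gets evaluated for every contact pair, every step.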

  10. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.

  11. Summary Report of Working Group 2: Computation

    NASA Astrophysics Data System (ADS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  12. Summary Report of Working Group 2: Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-22

The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  13. Linearized T-Matrix and Mie Scattering Computations

    NASA Technical Reports Server (NTRS)

    Spurr, R.; Wang, J.; Zeng, J.; Mishchenko, M. I.

    2011-01-01

We present a new linearization of T-Matrix and Mie computations for light scattering by non-spherical and spherical particles, respectively. In addition to the usual extinction and scattering cross-sections and the scattering matrix outputs, the linearized models will generate analytical derivatives of these optical properties with respect to the real and imaginary parts of the particle refractive index, and (for non-spherical scatterers) with respect to the "shape" parameter (the spheroid aspect ratio, cylinder diameter/height ratio, Chebyshev particle deformation factor). These derivatives are based on the essential linearity of Maxwell's theory. Analytical derivatives are also available for polydisperse particle size distribution parameters such as the mode radius. The T-matrix formulation is based on the NASA Goddard Institute for Space Studies FORTRAN 77 code developed in the 1990s. The linearized scattering codes presented here are in FORTRAN 90 and will be made publicly available.
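Analytic derivatives of optical properties with respect to the refractive index are typically verified against finite differences. The sketch below performs such a check in the small-particle (Rayleigh) limit rather than with the full T-matrix machinery, so the closed-form cross-section and its hand-derived derivative can be compared directly.

```python
import numpy as np

def rayleigh_csca(radius, wavelength, m):
    """Scattering cross-section of a small sphere in the Rayleigh limit:
    Csca = (8/3) * pi * k^4 * a^6 * |(m^2 - 1) / (m^2 + 2)|^2."""
    k = 2 * np.pi / wavelength
    alpha = (m**2 - 1) / (m**2 + 2)
    return (8.0 / 3.0) * np.pi * k**4 * radius**6 * abs(alpha) ** 2

def dcsca_dmr(radius, wavelength, m, h=1e-6):
    """Central-difference derivative with respect to the real part of m."""
    up = rayleigh_csca(radius, wavelength, m + h)
    dn = rayleigh_csca(radius, wavelength, m - h)
    return (up - dn) / (2 * h)
```

For a real index, the hand derivative of the polarizability factor is d/dm [(m²-1)/(m²+2)] = 6m/(m²+2)², and the finite-difference result should match the resulting analytic derivative to high precision.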

  14. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-08

Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electro-magnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  15. 50 GFlops molecular dynamics on the Connection Machine 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomdahl, P.S.; Tamayo, P.; Groenbech-Jensen, N.

    1993-12-31

The authors present timings and performance numbers for a new short range three dimensional (3D) molecular dynamics (MD) code, SPaSM, on the Connection Machine-5 (CM-5). They demonstrate that runs with more than 10^8 particles are now possible on massively parallel MIMD computers. To the best of their knowledge this is at least an order of magnitude more particles than what has previously been reported. Typical production runs show sustained performance (including communication) in the range of 47-50 GFlops on a 1024 node CM-5 with vector units (VUs). The speed of the code scales linearly with the number of processors and with the number of particles and shows 95% parallel efficiency in the speedup.

  16. Simulation of Charge Collection in Diamond Detectors Irradiated with Deuteron-Triton Neutron Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milocco, Alberto; Trkov, Andrej; Pillon, Mario

    2011-12-13

Diamond-based neutron spectrometers exhibit outstanding properties such as radiation hardness, low sensitivity to gamma rays, fast response and high-energy resolution. They represent a very promising application of diamonds for plasma diagnostics in fusion devices. The measured pulse height spectrum is obtained from the collection of helium and beryllium ions produced by the reactions on ¹²C. An original code is developed to simulate the production and the transport of charged particles inside the diamond detector. The ion transport methodology is based on the well-known TRIM code. The reactions of interest are triggered using the ENDF/B-VII.0 nuclear data for the neutron interactions on carbon. The model is implemented in the TALLYX subroutine of the MCNP5 and MCNPX codes. Measurements with diamond detectors in a ~14 MeV neutron field have been performed at the FNG (Rome, Italy) and IRMM (Geel, Belgium) facilities. The comparison of the experimental data with the simulations validates the proposed model.

  17. AMBER: a PIC slice code for DARHT

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Fawley, William

    1999-11-01

The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice Particle-In-Cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self, as well as external, electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and some preliminary results obtained when studying emittance growth during the beam propagation.

  18. Study on friction coefficient of soft soil based on particle flow code

    NASA Astrophysics Data System (ADS)

    Lei, Xiaohong; Zhang, Zhongwei

    2017-04-01

There is no uniform method for determining the micro parameters in particle flow code, and the corresponding formulas obtained by individual researchers apply only to similar situations. In this paper, the relationship between the micro-parameter friction coefficient and the macro-parameter friction angle is established using the biaxial servo-controlled compression test as the calibration experiment, and the corresponding formula is fitted. This resolves the difficulty of determining the PFC micro parameters and provides a reference for determining the micro parameters of soft soil.
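The calibration workflow described, fitting a macro-micro relation and then inverting it to pick a micro parameter for a target macro response, can be sketched as a least-squares fit. The linear functional form and the sample values below are illustrative assumptions, not the paper's fitted formula or measured data.

```python
import numpy as np

def fit_friction_relation(mu_micro, phi_macro):
    """Least-squares linear fit phi = a * mu + b linking the PFC micro
    friction coefficient mu to the macroscopic friction angle phi (degrees)."""
    a, b = np.polyfit(mu_micro, phi_macro, 1)
    return a, b

def predict_mu(phi_target, a, b):
    """Invert the fitted relation to choose a micro parameter that should
    reproduce a target macroscopic friction angle."""
    return (phi_target - b) / a
```

In practice each (mu, phi) pair would come from one calibrated biaxial compression simulation; the fit then replaces trial-and-error tuning of the micro parameter.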

  19. High altitude chemically reacting gas particle mixtures. Volume 1: A theoretical analysis and development of the numerical solution. [rocket nozzle and orbital plume flow fields

    NASA Technical Reports Server (NTRS)

    Smith, S. D.

    1984-01-01

    The overall contractual effort and the theory and numerical solution for the Reacting and Multi-Phase (RAMP2) computer code are described. The code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields. Fundamental equations for steady flow of reacting gas-particle mixtures, method of characteristics, mesh point construction, and numerical integration of the conservation equations are considered herein.

  20. Spheromak reactor-design study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Les, J.M.

    1981-06-30

A general overview of spheromak reactor characteristics, such as MHD stability, start-up, and plasma geometry is presented. In addition, comparisons are made between spheromaks, tokamaks and field reversed mirrors. The computer code Sphero is also discussed. Sphero is a zero-dimensional transport code that uses particle confinement times and profile parameters as input, since they are not known with certainty at the present time. More specifically, Sphero numerically solves a given set of transport equations whose solutions include such variables as fuel ion (deuterium and tritium) density, electron density, alpha particle density, and ion and electron temperatures.
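A zero-dimensional transport code of this kind advances volume-averaged balances using prescribed confinement times. A minimal single-species sketch (forward Euler, with a hypothetical source strength and confinement time, not Sphero's actual equations):

```python
def evolve_density(n0, source, tau_p, dt, n_steps):
    """Zero-dimensional particle balance dn/dt = S - n / tau_p,
    integrated with forward Euler; tau_p is the prescribed particle
    confinement time and S a constant volumetric source."""
    n = n0
    for _ in range(n_steps):
        n += dt * (source - n / tau_p)
    return n
```

The steady state n = S * tau_p follows directly from setting dn/dt = 0, which is a convenient check on the integrator; a full 0-D code couples several such balances (fuel ions, electrons, alphas, energies) through shared source and loss terms.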

  1. Terahertz wave manipulation based on multi-bit coding artificial electromagnetic surfaces

    NASA Astrophysics Data System (ADS)

    Li, Jiu-Sheng; Zhao, Ze-Jiang; Yao, Jian-Quan

    2018-05-01

A polarization-insensitive multi-bit coding artificial electromagnetic surface is proposed for terahertz wave manipulation. The coding artificial electromagnetic surfaces, composed of four-arrow-shaped particles arranged in certain coding sequences, can realize multi-bit coding at terahertz frequencies and steer the reflected terahertz waves into numerous directions by using different coding distributions. Furthermore, we demonstrate that our coding artificial electromagnetic surfaces can strongly reduce the radar cross section, insensitive to polarization, for TE and TM incident terahertz waves as well as for linearly and circularly polarized terahertz waves. This work offers an effective strategy to realize more powerful manipulation of terahertz waves.
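The beam-steering effect of a coding sequence can be estimated from the far-field array factor of the element reflection phases. In the 1-bit sketch below (element spacing and the '000111' coding supercell are illustrative assumptions, not the paper's design), equal numbers of 0 and π elements cancel the broadside return, and the periodic supercell of length Γ redirects energy toward the grating angle sin θ = λ/Γ.

```python
import numpy as np

def array_factor(phases, d_over_lambda, theta):
    """Magnitude of the far-field array factor of a 1-D row of coding
    elements with reflection phases `phases` (radians) and element
    spacing d given in wavelengths, observed at angle theta."""
    n = np.arange(len(phases))
    k_d = 2 * np.pi * d_over_lambda
    return abs(np.sum(np.exp(1j * (phases + k_d * n * np.sin(theta)))))
```

This simple scalar model already reproduces the two qualitative behaviours the paper exploits: suppression of the specular return (radar cross section reduction) and redirection of the reflected beam by the choice of coding distribution.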

  2. Numerical Analysis of Dusty-Gas Flows

    NASA Astrophysics Data System (ADS)

    Saito, T.

    2002-02-01

This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used for validating the accuracy and performance of the code. The code is then extended for simulating two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, we can choose numerical schemes independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue and the code implementation on Origin2000 is also described. Flow profiles of both the gas and solid particles behind the steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions of unsteady multidimensional simulations.
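Operator splitting as described lets the inter-phase coupling be handled in its own sub-step, where a linear drag law has an exact exponential solution. A sketch of such a velocity-relaxation sub-step follows; the parametrization (dust-to-gas mass loading `eps`, relaxation time `tau`) is an illustrative assumption, not the paper's scheme.

```python
import numpy as np

def relax_velocities(u_gas, u_dust, eps, dt, tau):
    """Interaction sub-step of operator splitting: exchange momentum between
    gas and dust through linear drag with relaxation time tau.

    eps is the dust-to-gas mass loading. The velocity difference decays
    exactly (exponentially) over the sub-step, and the mixture momentum
    u_gas + eps * u_dust is conserved to machine precision.
    """
    du = (u_dust - u_gas) * np.exp(-dt * (1 + eps) / tau)   # exact decay
    u_cm = (u_gas + eps * u_dust) / (1 + eps)               # center-of-mass velocity
    return u_cm - eps * du / (1 + eps), u_cm + du / (1 + eps)
```

Because the sub-step is solved exactly, the overall scheme stays stable even when the drag time `tau` is much shorter than the gas-dynamic time step, which is the usual motivation for splitting the phases.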

  3. UNIPIC code for simulations of high power microwave devices

    NASA Astrophysics Data System (ADS)

    Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze

    2009-03-01

    In this paper, UNIPIC, a new member of the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time-step reduction of the conformal-path FDTD method, the CP weakly conditionally stable FDTD (CP WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half-dimensional, is written in the object-oriented C++ language, and can run on a variety of platforms including Windows, Linux, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices or to import previously created structures. Numerical experiments on some typical HPM devices using the UNIPIC code are presented; the results agree well with those obtained from well-known PIC codes.
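
The relativistic Newton-Lorentz push mentioned above is commonly implemented with the Boris scheme, which splits the electric acceleration into two half-steps around a pure magnetic rotation. The sketch below is a generic stand-in, not UNIPIC's actual source; the normalized units and the charge-to-mass parameter `q_m` are illustrative.

```python
import numpy as np

def boris_push(u, E, B, q_m, dt, c=1.0):
    """One relativistic Boris step for u = gamma*v, given fields E, B
    at the particle position and q_m = q/m. Normalized units assumed."""
    # First half electric kick.
    u_minus = u + 0.5 * q_m * dt * E
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
    # Magnetic rotation (exactly norm-preserving).
    t = 0.5 * q_m * dt * B / gamma
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    # Second half electric kick.
    return u_plus + 0.5 * q_m * dt * E
```

The rotation substep is an exact rotation of `u_minus`, which is why the Boris mover conserves energy in a pure magnetic field.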

  4. Simulation of Turbulent Combustion Fields of Shock-Dispersed Aluminum Using the AMR Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhl, A L; Bell, J B; Beckner, V E

    2006-11-02

    We present a model for simulating experiments of combustion in Shock-Dispersed-Fuel (SDF) explosions. The SDF charge consisted of a 0.5-g spherical PETN booster, surrounded by 1 g of fuel powder (flake Aluminum). Detonation of the booster charge creates a high-temperature, high-pressure source (PETN detonation product gases) that both disperses the fuel and heats it. Combustion ensues when the fuel mixes with air. The gas phase is governed by the gas-dynamic conservation laws, while the particle phase obeys the continuum mechanics laws for heterogeneous media. The two phases exchange mass, momentum and energy according to inter-phase interaction terms. The kinetics model used an empirical particle burn relation. The thermodynamic model considers the air, fuel and booster products to be of frozen composition, while the Al combustion products are assumed to be in equilibrium. The thermodynamic states were calculated by the Cheetah code; the resulting state points were fit with analytic functions suitable for numerical simulations. Numerical simulations of combustion of an Aluminum SDF charge in a 6.4-liter chamber were performed. Computed pressure histories agree with measurements.

  5. 3D Field Modifications of Core Neutral Fueling In the EMC3-EIRENE Code

    NASA Astrophysics Data System (ADS)

    Waters, Ian; Frerichs, Heinke; Schmitz, Oliver; Ahn, Joon-Wook; Canal, Gustavo; Evans, Todd; Feng, Yuehe; Kaye, Stanley; Maingi, Rajesh; Soukhanovskii, Vsevolod

    2017-10-01

    The application of 3-D magnetic field perturbations to the edge plasmas of tokamaks has long been seen as a viable way to control damaging Edge Localized Modes (ELMs). These 3-D fields have also been correlated with a density drop in the core plasmas of tokamaks, known as `pump-out'. While pump-out is typically explained as the result of enhanced outward transport, degraded fueling of the core may also play a role. By altering the temperature and density of the plasma edge, 3-D fields will impact the distribution function of high-energy neutral particles produced through ion-neutral energy exchange processes. Starved of the deeply penetrating neutral source, the core density will decrease. Numerical studies carried out with the EMC3-EIRENE code on National Spherical Torus eXperiment-Upgrade (NSTX-U) equilibria show that this change to core fueling by high-energy neutrals may be a significant contributor to the overall particle balance in the NSTX-U tokamak: deep core (Ψ < 0.5) fueling from neutral ionization sources is decreased by 40-60% with RMPs. This work was funded by the US Department of Energy under Grant DE-SC0012315.

  6. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. II. IMPLEMENTATION AND PERFORMANCE CHARACTERISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-10-01

    We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared-memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared-memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve the overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate-sized test problems using the Origin 3000 architecture, and up to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE used in GRAPE-tree mode is approximately a factor of 2 slower than that of VINE used in host-only mode. Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of 3, but have not yet been implemented in VINE. Finally, we find that although parallel performance on small problems may reach a plateau beyond which more processors bring no additional speedup, performance never decreases, a factor important for running large simulations on many processors with individual time steps, where only a small fraction of the total particles require updates at any given moment.

  7. Annual Technical Report Number 2 for Grant Number AFOSR-90-0085, Center for Theoretical Geoplasma Physics, Center for Space Research, Massachusetts Institute of Technology

    DTIC Science & Technology

    1992-02-15

    ...Elena Villalón, Michael B. Silevitch, William J. Burke, and Paul L. Rothwell, Artificial Electron Beams in the Magnetosphere and Ionosphere... O. Buneman and T. Neubert, Simulation Studies of Electron Beam-Driven Instabilities by a 3-D Electromagnetic Particle Code...

  8. Detection of Capsule Tampering by Near-Infrared Reflectance Analysis.

    DTIC Science & Technology

    1987-08-01

    ...500 to 50 mg of KCN, and the KCN consisted of fairly large crystals while the analgesic was a powder of small particle size. ...NIRA instruments are relatively inexpensive. Little or no sample preparation is required in NIRA, and powders can be directly...

  9. Engine Cycle Analysis for a Particle Bed Reactor Nuclear Rocket

    DTIC Science & Technology

    1991-03-01

    ...Output for Bleed Cycle with 2000 MW PBR and Uncooled Nozzle... Output for Bleed Cycle with 2000 MW PBR and Cooled Nozzle... Output for Expander Cycle with 2000 MW PBR... Mars with carbon dioxide, the primary component of the Martian atmosphere. Carbon dioxide would deliver a smaller ... but its use would eliminate the...

  10. Simulation of Fluid Flow and Collection Efficiency for an SEA Multi-element Probe

    NASA Technical Reports Server (NTRS)

    Rigby, David L.; Struk, Peter M.; Bidwell, Colin

    2014-01-01

    Numerical simulations of fluid flow and collection efficiency for a Science Engineering Associates (SEA) multi-element probe are presented. Simulation of the flow field was produced using the Glenn-HT Navier-Stokes solver. Three-dimensional unsteady results were produced and then time-averaged for the collection efficiency results. Three grid densities were investigated to enable an assessment of grid dependence. Collection efficiencies were generated for three spherical particle sizes, 100, 20, and 5 microns in diameter, using the codes LEWICE3D and LEWICE2D. The free-stream Mach number was 0.27, representing a velocity of approximately 86 m/s. It was observed that a reduction in velocity of about 15-20% occurred as the flow entered the shroud of the probe. Collection efficiency results indicate a reduction in collection efficiency as particle size is reduced. The reduction with particle size is expected; however, the results tended to be lower than previous results generated for isolated two-dimensional elements. The deviation from the two-dimensional results is more pronounced for the smaller particles and is likely due to the effect of the protective shroud.

  11. A hydrodynamic treatment of the tilted cold dark matter cosmological scenario

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Ostriker, Jeremiah P.

    1993-01-01

    A standard hydrodynamic code coupled with a particle-mesh code is used to compute the evolution of a tilted cold dark matter (TCDM) model containing both baryonic matter and dark matter. Six baryonic species are followed, with allowance for both collisional and radiative ionization in every cell. The mean final Sunyaev-Zel'dovich y parameter is estimated to be (5.4 ± 2.7) × 10^-7, below currently attainable observations, with an rms fluctuation of about (6.0 ± 3.0) × 10^-7 on arcminute scales. The rate of galaxy formation peaks at a relatively late epoch (z of about 0.5). As for the mass function, the smallest objects are stabilized against collapse by thermal energy: the mass-weighted mass spectrum peaks in the vicinity of 10^9.1 solar masses, with a reasonable fit to the Schechter luminosity function if the baryon mass to blue light ratio is about 4. It is shown that a bias factor of 2, required for the model to be consistent with COBE DMR signals, is probably a natural outcome in the present multiple-component simulations.

  12. Coupling of in-situ X-ray Microtomography Observations with Discrete Element Simulations-Application to Powder Sintering

    NASA Astrophysics Data System (ADS)

    Olmos, L.; Bouvard, D.; Martin, C. L.; Bellet, D.; Di Michiel, M.

    2009-06-01

    The sintering of both a powder with a wide particle size distribution (0-63 μm) and a powder with artificially created pores is investigated by coupling in situ X-ray microtomography observations with Discrete Element simulations. The microstructure evolution of the copper particles is observed by microtomography throughout a typical sintering cycle at 1050 °C at the European Synchrotron Radiation Facility (ESRF, Grenoble, France). A quantitative analysis of the 3D images provides original data on interparticle indentation, coordination, and particle displacements throughout sintering. In parallel, the sintering of similar powder systems has been simulated with a discrete element code which incorporates appropriate sintering contact laws from the literature. The initial numerical packing is generated either directly from the 3D microtomography images or from a random set of particles with the same size distribution. The comparison between the information drawn from the simulations and that obtained by tomography leads to the conclusion that the first method is not satisfactory, because real particles are not perfectly spherical like the numerical ones. In contrast, the packings built with the second method show sintering behaviors close to those of real materials, although particle rearrangement is underestimated by the DEM simulations.

  13. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot easily be migrated to run on GPUs. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity at low cost for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may optionally be included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. We do not support the use of the code for military purposes.

  14. Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2015-11-01

    FAR-TECH, Inc. is developing a suite of full wave RF codes for hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For tokamak applications, a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) It utilizes the localized nature of the plasma dielectric response to the RF field and calculates this response numerically without approximations. 2) It uses an adaptive grid to better resolve resonances in the plasma and antenna structures. 3) It uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.

  15. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We focus on the parallelization and vectorization strategies followed and present a detailed evaluation of code performance in comparison with the CPU code.

  16. ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings

    NASA Astrophysics Data System (ADS)

    Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-12-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
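
ORBIT's node-pipeline design, a bunch of particles transported through a series of nodes that each apply one element or effect, can be illustrated with a minimal sketch. The class names and the 2D phase-space coordinates below are invented for illustration and are not ORBIT's API.

```python
# Hedged sketch of a node-chain tracking loop. Each node implements
# track(bunch); a bunch is a list of [x, xp] coordinate pairs.

class Drift:
    """Field-free drift of given length: x advances by length * xp."""
    def __init__(self, length):
        self.length = length
    def track(self, bunch):
        for p in bunch:
            p[0] += self.length * p[1]

class ThinQuad:
    """Thin quadrupole kick of integrated strength kl: xp -= kl * x."""
    def __init__(self, kl):
        self.kl = kl
    def track(self, bunch):
        for p in bunch:
            p[1] -= self.kl * p[0]

def transport(bunch, lattice):
    """Pass the bunch through every node of the lattice in order."""
    for node in lattice:
        node.track(bunch)
    return bunch
```

New effects (space charge, apertures, diagnostics) slot in as additional node classes without changing the transport loop, which is the design point the abstract describes.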

  17. Full-f version of GENE for turbulence in open-field-line systems

    NASA Astrophysics Data System (ADS)

    Pan, Q.; Told, D.; Shi, E. L.; Hammett, G. W.; Jenko, F.

    2018-06-01

    Unique properties of plasmas in the tokamak edge, such as large amplitude fluctuations and plasma-wall interactions in the open-field-line regions, require major modifications of existing gyrokinetic codes originally designed for simulating core turbulence. To this end, the global version of the 3D2V gyrokinetic code GENE, so far employing a δf-splitting technique, is extended to simulate electrostatic turbulence in straight open-field-line systems. The major extensions are the inclusion of the velocity-space nonlinearity, the development of a conducting-sheath boundary, and the implementation of the Lenard-Bernstein collision operator. With these developments, the code can be run as a full-f code and can handle particle loss to and reflection from the wall. The extended code is applied to modeling turbulence in the Large Plasma Device (LAPD), with a reduced mass ratio and a much lower collisionality. Similar to turbulence in a tokamak scrape-off layer, LAPD turbulence involves collisions, parallel streaming, cross-field turbulent transport with steep profiles, and particle loss at the parallel boundary.
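
The Lenard-Bernstein collision operator mentioned above has the drag-plus-diffusion form C[f] = ν ∂/∂v (v f + v_t² ∂f/∂v), for which a Maxwellian with thermal speed v_t is an equilibrium. The explicit finite-difference step below is a simplified 1V illustration; the grid, time step, and boundary handling are assumptions, not GENE's implementation.

```python
import numpy as np

def lenard_bernstein_step(f, v, nu, vt, dt):
    """One explicit Euler step of C[f] = nu * d/dv (v*f + vt^2 * df/dv)
    on a uniform velocity grid v, using central differences."""
    dv = v[1] - v[0]
    flux = v * f + vt**2 * np.gradient(f, dv)  # drag + diffusion flux
    return f + dt * nu * np.gradient(flux, dv)
```

Because the operator is written in flux form, drag and diffusion balance exactly on a Maxwellian, so the discrete step leaves it nearly unchanged (up to truncation error).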

  18. A molecular dynamics implementation of the 3D Mercedes-Benz water model

    NASA Astrophysics Data System (ADS)

    Hynninen, T.; Dias, C. L.; Mkrtchyan, A.; Heinonen, V.; Karttunen, M.; Foster, A. S.; Ala-Nissila, T.

    2012-02-01

    The three-dimensional Mercedes-Benz model was recently introduced to account for the structural and thermodynamic properties of water. It treats water molecules as point-like particles with four dangling bonds in tetrahedral coordination, representing H-bonds of water. Its conceptual simplicity renders the model attractive in studies where complex behaviors emerge from H-bond interactions in water, e.g., the hydrophobic effect. A molecular dynamics (MD) implementation of the model is non-trivial and we outline here the mathematical framework of its force-field. Useful routines written in modern Fortran are also provided. This open source code is free and can easily be modified to account for different physical context. The provided code allows both serial and MPI-parallelized execution. Program summaryProgram title: CASHEW (Coarse Approach Simulator for Hydrogen-bonding Effects in Water) Catalogue identifier: AEKM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 20 501 No. of bytes in distributed program, including test data, etc.: 551 044 Distribution format: tar.gz Programming language: Fortran 90 Computer: Program has been tested on desktop workstations and a Cray XT4/XT5 supercomputer. Operating system: Linux, Unix, OS X Has the code been vectorized or parallelized?: The code has been parallelized using MPI. RAM: Depends on size of system, about 5 MB for 1500 molecules. Classification: 7.7 External routines: A random number generator, Mersenne Twister ( http://www.math.sci.hiroshima-u.ac.jp/m-mat/MT/VERSIONS/FORTRAN/mt95.f90), is used. A copy of the code is included in the distribution. Nature of problem: Molecular dynamics simulation of a new geometric water model. 
Solution method: New force-field for water molecules, velocity-Verlet integration, representation of molecules as rigid particles with rotations described using quaternion algebra. Restrictions: Memory and cpu time limit the size of simulations. Additional comments: Software web site: https://gitorious.org/cashew/. Running time: Depends on the size of system. The sample tests provided only take a few seconds.

  19. Geometrical-optics code for computing the optical properties of large dielectric spheres.

    PubMed

    Zhou, Xiaobing; Li, Shusun; Stamnes, Knut

    2003-07-20

    Absorption of electromagnetic radiation by absorptive dielectric spheres such as snow grains in the near-infrared part of the solar spectrum cannot be neglected when radiative properties of snow are computed. Thus a new (to our knowledge) geometrical-optics code is developed to compute the scattering and absorption cross sections of large dielectric particles of arbitrary complex refractive index. The number of internal reflections and transmissions is truncated on the basis of the ratio of the irradiance incident at the nth interface to the irradiance incident at the first interface for a specific optical ray; the truncation number is thus a function of the angle of incidence. Phase functions for both near- and far-field absorption and scattering of electromagnetic radiation are calculated directly at any desired scattering angle by using a hybrid algorithm based on the bisection and Newton-Raphson methods. With these methods the absorption and scattering properties of light for a large sphere can be calculated at any wavelength from the ultraviolet to the microwave region. Assuming that large snow meltclusters (of order 1 cm), observed ubiquitously in the snow cover during summer, can be characterized as spheres, one may compute absorption and scattering efficiencies and the scattering phase function on the basis of this geometrical-optics method. A geometrical-optics method for spheres (GOMsphere) code is developed and tested against Wiscombe's Mie scattering code (MIE0) and a Monte Carlo code for a range of size parameters. GOMsphere can be combined with MIE0 to calculate the single-scattering properties of dielectric spheres of any size.
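
The truncation criterion described above, stop following a ray once its irradiance at the nth interface falls below a fixed fraction of that at the first interface, can be sketched as follows. This is a simplified illustration that assumes a constant reflectance `R` per internal interface; in the paper the attenuation varies with the angle of incidence through the Fresnel coefficients, which is why the truncation number depends on that angle.

```python
def truncation_number(R, threshold=1e-6, n_max=1000):
    """Number of internal interfaces to keep before the ray's
    irradiance, attenuated by a factor R at each interface, drops
    below `threshold` times the first-interface irradiance."""
    irradiance = 1.0
    for n in range(1, n_max + 1):
        irradiance *= R
        if irradiance < threshold:
            return n
    return n_max  # cap for nearly lossless rays
```

Rays near grazing incidence have reflectance close to 1 and therefore require many more internal bounces before truncation than rays near normal incidence.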

  20. Development of SSUBPIC code for modeling the neutral gas depletion effect in helicon discharges

    NASA Astrophysics Data System (ADS)

    Kollasch, Jeffrey; Sovenic, Carl; Schmitz, Oliver

    2017-10-01

    The SSUBPIC (steady-state unstructured-boundary particle-in-cell) code is being developed to model helicon plasma devices. The envisioned modeling framework incorporates (1) a kinetic neutral particle model, (2) a kinetic ion model, (3) a fluid electron model, and (4) an RF power deposition model. The models are loosely coupled and iterated until convergence to steady state. Of the four required solvers, the kinetic ion and neutral particle simulations can now be done within the SSUBPIC code. Recent SSUBPIC modifications include the implementation and testing of a Coulomb collision model (Lemons et al., JCP, 228(5), pp. 1391-1403) allowing efficient coupling of kinetically treated ions to fluid electrons, and the implementation of a neutral particle tracking mode with charge-exchange and electron-impact ionization physics. These new simulation capabilities are demonstrated working independently and coupled to ``dummy'' profiles for RF power deposition to converge on steady-state plasma and neutral profiles. The geometry and conditions considered are similar to those of the MARIA experiment at UW-Madison. Initial results qualitatively show the expected neutral gas depletion effect, in which neutrals in the plasma core are not replenished at a rate sufficient to sustain a higher plasma density. This work is funded by the NSF CAREER award PHY-1455210 and NSF Grant PHY-1206421.

  1. 3D Reconnection and SEP Considerations in the CME-Flare Problem

    NASA Astrophysics Data System (ADS)

    Moschou, S. P.; Cohen, O.; Drake, J. J.; Sokolov, I.; Borovikov, D.; Alvarado Gomez, J. D.; Garraffo, C.

    2017-12-01

    Reconnection is known to play a major role in particle acceleration in both solar and astrophysical regimes, yet little is known about its connection with global scales or its contribution to SEP generation relative to other acceleration mechanisms, such as the shock at a fast CME front, in the presence of a global structure such as a CME. Coupling efforts that combine both particle and global scales are necessary to answer questions about the fundamentals of the energetic processes involved. We present such a coupled modeling effort that examines particle acceleration through reconnection in a self-consistent CME-flare model in both the particle and fluid regimes. Of special interest is the supra-thermal component of the reconnection-driven acceleration, which will later collide with the denser chromospheric material of the solar atmosphere and radiate in hard X-rays and γ-rays from supra-thermal electrons and protons, respectively. Two cutting-edge computational codes are used to capture the global CME and flare dynamics: a two-fluid MHD code and a 3D PIC code for the flare scales. Finally, we connect the simulations with current observations at different wavelengths in an effort to shed light on the unified CME-flare picture.

  2. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

    This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.

  3. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE PAGES

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...

    2018-06-14

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
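
The Feynman histograms named among the benchmark parameters above summarize correlated multiplicity through the excess variance-to-mean ratio of neutron counts collected in equal time gates (Feynman-Y). A minimal sketch of that statistic follows; the gating, dead-time, and efficiency corrections of real list-mode analysis are omitted.

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y statistic of gated neutron counts: Y = var/mean - 1.
    Zero for Poisson (uncorrelated) emission; positive when fission
    chains make the counts over-dispersed."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() - 1.0
```

In practice Y is computed for a range of gate widths, and the resulting curve is one of the observables against which the correlated-fission models are validated.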

  4. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.

  5. Microdosimetric evaluation of the neutron field for BNCT at Kyoto University reactor by using the PHITS code.

    PubMed

    Baba, H; Onizuka, Y; Nakao, M; Fukahori, M; Sato, T; Sakurai, Y; Tanaka, H; Endo, S

    2011-02-01

    In this study, microdosimetric energy distributions of secondary charged particles from the (10)B(n,α)(7)Li reaction in a boron-neutron capture therapy (BNCT) field were calculated using the Particle and Heavy Ion Transport code System (PHITS). The PHITS simulation was performed to reproduce the geometrical set-up of an experiment that measured the microdosimetric energy distributions at the Kyoto University Reactor, where two types of tissue-equivalent proportional counters were used, one with an A-150 wall alone and another with a 50-ppm-boron-loaded A-150 wall. Based on comparisons with the experimental results, it was found that the PHITS code is a useful tool for simulating the energy deposited in tissue in BNCT.

  6. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.

  7. A new hybrid code (CHIEF) implementing the inertial electron fluid equation without approximation

    NASA Astrophysics Data System (ADS)

    Muñoz, P. A.; Jain, N.; Kilian, P.; Büchner, J.

    2018-03-01

    We present a new hybrid algorithm implemented in the code CHIEF (Code Hybrid with Inertial Electron Fluid) for simulations of electron-ion plasmas. The algorithm treats the ions kinetically, modeled by the Particle-in-Cell (PiC) method, and electrons as an inertial fluid, modeled by electron fluid equations without any of the approximations used in most of the other hybrid codes with an inertial electron fluid. This kind of code is appropriate to model a large variety of quasineutral plasma phenomena where the electron inertia and/or ion kinetic effects are relevant. We present here the governing equations of the model, how these are discretized and implemented numerically, as well as six test problems to validate our numerical approach. Our chosen test problems, where the electron inertia and ion kinetic effects play the essential role, are: 0) Excitation of parallel eigenmodes to check numerical convergence and stability, 1) parallel (to a background magnetic field) propagating electromagnetic waves, 2) perpendicular propagating electrostatic waves (ion Bernstein modes), 3) ion beam right-hand instability (resonant and non-resonant), 4) ion Landau damping, 5) ion firehose instability, and 6) 2D oblique ion firehose instability. Our results reproduce successfully the predictions of linear and non-linear theory for all these problems, validating our code. All properties of this hybrid code make it ideal to study multi-scale phenomena between electron and ion scales such as collisionless shocks, magnetic reconnection and kinetic plasma turbulence in the dissipation range above the electron scales.

  8. New methods in WARP, a particle-in-cell code for space-charge dominated beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D., LLNL

    1998-01-12

    The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space-charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being "space-charge dominated") and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments will be discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining essential physics. An interface using Python as the interpreter layer instead of Basis has been developed. A parallel version of WARP has been developed using Python.

  9. Feasibility study for combination of field-flow fractionation (FFF)-based separation of size-coded particle probes with amplified surface enhanced Raman scattering (SERS) tagging for simultaneous detection of multiple miRNAs.

    PubMed

    Shin, Kayeong; Choi, Jaeyeong; Kim, Yeoju; Lee, Yoonjeong; Kim, Joohoon; Lee, Seungho; Chung, Hoeil

    2018-06-29

    We propose a new analytical scheme in which field-flow fractionation (FFF)-based separation of target-specific polystyrene (PS) particle probes of different sizes is incorporated with amplified surface-enhanced Raman scattering (SERS) tagging for the simultaneous and sensitive detection of multiple microRNAs (miRNAs). For multiplexed detection, PS particles of three different diameters (15, 10, 5 μm) were used for the size-coding, and a probe single-stranded DNA (ssDNA) complementary to a target miRNA was conjugated on an intended PS particle. After binding of a target miRNA on a PS probe, a polyadenylation reaction was executed to generate a long tail composed of adenine (A), serving as a binding site for thymine (T)-conjugated Au nanoparticles (T-AuNPs) to increase SERS intensity. The three size-coded PS probes bound with T-AuNPs were then separated in a FFF channel. From the observation of extinction-based fractograms, separation of the three size-coded PS probes was clearly confirmed, thereby enabling the measurement of three miRNAs simultaneously. Raman intensities of FFF fractions collected at the peak maxima of the 15, 10 and 5 μm PS probes varied fairly quantitatively with the change of miRNA concentrations, and the reproducibility of the measurement was acceptable. The proposed method is potentially useful for simultaneous detection of multiple miRNAs with high sensitivity. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Estimating cirrus cloud properties from MIPAS data

    NASA Astrophysics Data System (ADS)

    Mendrok, J.; Schreier, F.; Höpfner, M.

    2007-04-01

    High resolution mid-infrared limb emission spectra observed by the spaceborne Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) showing evidence of cloud interference are analyzed. Using the new line-by-line multiple scattering [Approximate] Spherical Atmospheric Radiative Transfer code (SARTre), a sensitivity study with respect to cirrus cloud parameters, e.g., optical thickness and particle size distribution, is performed. Cirrus properties are estimated by fitting spectra in three distinct microwindows between 8 and 12 μm. For a cirrus with extremely low ice water path (IWP = 0.1 g/m2) and small effective particle size (D e = 10 μm) simulated spectra are in close agreement with observations in broadband signal and fine structures. We show that a multi-microwindow technique enhances reliability of MIPAS cirrus retrievals compared to single microwindow methods.

  11. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    Data was derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). MCNP is a general-purpose code designed to simulate neutron, photon, and electron transport.

  12. Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A

    2005-01-01

    The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion)60Co radiotherapy unit, using the Monte Carlo transport code, MCNP (Monte Carlo N-Particle). This project explains mainly the different methodologies carried out to speedup calculations in order to apply this code efficiently in radiotherapy treatment planning.

  13. Agricultural Spraying

    NASA Technical Reports Server (NTRS)

    1986-01-01

    AGDISP, a computer code written for Langley by Continuum Dynamics, Inc., aids crop dusting airplanes in targeting pesticides. The code is commercially available and can be run on a personal computer by an inexperienced operator. Called SWA+H, it is used by the Forest Service, FAA, DuPont, etc. DuPont uses the code to "test" equipment on the computer using a laser system to measure particle characteristics of various spray compounds.

  14. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang

    A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles by a fraction of the Eulerian grid spacing helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme in the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The preexisting code with a first-order convergence rate was updated so that the new code can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code along with the new formula for the number of Lagrangian markers was further validated by solving several theoretical problems.
    Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotational and rotational spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions and at both low and moderate particle Reynolds numbers, to compare the simulated results with the literature and to develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulation results to be closely applicable to real processes over the entire range of packing fractions and at both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers, and is basically unchanged as the angle of the rotating axis varies.
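
    The coupled fluid-particle update described above relies on the classical fourth-order Runge-Kutta scheme. As a minimal, generic sketch (not code from the paper; `rk4_step` is a hypothetical helper), one RK4 step for a rate equation dy/dt = f(t, y) is:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

    For a particle, y would be a translational or angular velocity component and f the hydrodynamic force or torque divided by the appropriate inertia.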

  15. Comparison of fluence-to-dose conversion coefficients for deuterons, tritons and helions.

    PubMed

    Copeland, Kyle; Friedberg, Wallace; Sato, Tatsuhiko; Niita, Koji

    2012-02-01

    Secondary radiation in aircraft and spacecraft includes deuterons, tritons and helions. Two sets of fluence-to-effective dose conversion coefficients for isotropic exposure to these particles were compared: one used the particle and heavy ion transport code system (PHITS) radiation transport code coupled with the International Commission on Radiological Protection (ICRP) reference phantoms (PHITS-ICRP) and the other the Monte Carlo N-Particle eXtended (MCNPX) radiation transport code coupled with modified BodyBuilder™ phantoms (MCNPX-BB). Also, two sets of fluence-to-effective dose equivalent conversion coefficients calculated using the PHITS-ICRP combination were compared: one used quality factors based on linear energy transfer; the other used quality factors based on lineal energy (y). Finally, PHITS-ICRP effective dose coefficients were compared with PHITS-ICRP effective dose equivalent coefficients. The PHITS-ICRP and MCNPX-BB effective dose coefficients were similar, except at high energies, where MCNPX-BB coefficients were higher. For helions, at most energies effective dose coefficients were much greater than effective dose equivalent coefficients. For deuterons and tritons, coefficients were similar when their radiation weighting factor was set to 2.

  16. Modelling of aircrew radiation exposure from galactic cosmic rays and solar particle events.

    PubMed

    Takada, M; Lewis, B J; Boudreau, M; Al Anid, H; Bennett, L G I

    2007-01-01

    Correlations have been developed for implementation into the semi-empirical Predictive Code for Aircrew Radiation Exposure (PCAIRE) to account for effects of extremum conditions of solar modulation and low altitude based on transport code calculations. An improved solar modulation model, as proposed by NASA, has been further adopted to interpolate between the bounding correlations for solar modulation. The conversion ratio of effective dose to ambient dose equivalent, as applied to the PCAIRE calculation (based on measurements) for the legal regulation of aircrew exposure, was re-evaluated in this work to take into consideration new ICRP-92 radiation-weighting factors and different possible irradiation geometries of the source cosmic-radiation field. A computational analysis with Monte Carlo N-Particle eXtended Code was further used to estimate additional aircrew exposure that may result from sporadic solar energetic particle events considering real-time monitoring by the Geosynchronous Operational Environmental Satellite. These predictions were compared with the ambient dose equivalent rates measured on-board an aircraft and to count rate data observed at various ground-level neutron monitors.

  17. Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.

    PubMed

    Zhong, Hualiang; Chetty, Indrin J

    2012-05-01

    Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach was evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.
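
    The cylindrical truncation reduces to a geometric test: a deposition point contributes to a beamlet only if it lies within a fixed radius of that beamlet's central axis. The following is an illustrative sketch under assumed names (`dist_to_axis`, `tally`), not the modified DOSXYZnrc code itself:

```python
import math

def dist_to_axis(point, origin, direction):
    """Perpendicular distance from a point to a beamlet central axis,
    where the axis is given by an origin and a unit direction vector."""
    r = [p - o for p, o in zip(point, origin)]
    proj = sum(ri * di for ri, di in zip(r, direction))
    perp = [ri - proj * di for ri, di in zip(r, direction)]
    return math.sqrt(sum(c * c for c in perp))

def tally(deposits, axes, radius):
    """Accumulate energy per beamlet, counting only depositions that
    fall inside the scoring cylinder around the beamlet axis."""
    dose = [0.0] * len(axes)
    for b, point, energy in deposits:  # (beamlet index, position, energy)
        origin, direction = axes[b]
        if dist_to_axis(point, origin, direction) <= radius:
            dose[b] += energy
    return dose
```

    Depositions far from the axis are simply dropped, which is what yields the reported speedup at a small cost in accuracy.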

  18. History by history statistical estimators in the BEAM code system.

    PubMed

    Walters, B R B; Kawrakow, I; Rogers, D W O

    2002-12-01

    A history by history method for estimating uncertainties has been implemented in the BEAMnrc and DOSXYZnrc codes, replacing the method of statistical batches. This method groups scored quantities (e.g., dose) by primary history. When phase-space sources are used, this method groups incident particles according to the primary histories that generated them. This necessitated adding markers (negative energy) to phase-space files to indicate the first particle generated by a new primary history. The new method greatly reduces the uncertainty in the uncertainty estimate. It also eliminates one dimension (which kept the results for each batch) from all scoring arrays, decreasing memory requirements by a factor of 2. Correlations between particles in phase-space sources are taken into account. The only correlations with any significant impact on uncertainty are those introduced by particle recycling. Failure to account for these correlations can result in a significant underestimate of the uncertainty. The previous method of accounting for correlations due to recycling, placing all recycled particles in the same batch, did work. Neither the new method nor the batch method takes into account correlations between incident particles when a phase-space source is restarted, so one must avoid restarts.
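
    The history-by-history estimator keeps, for each scored quantity, the running sums of the per-primary-history totals x_i and x_i^2, from which the standard error of the mean follows directly. A minimal sketch (hypothetical helper, not the BEAMnrc implementation):

```python
import math

def history_uncertainty(history_scores):
    """Mean and standard error of the mean from per-primary-history totals:
    s_xbar^2 = [ sum(x_i^2)/N - (sum(x_i)/N)^2 ] / (N - 1)."""
    n = len(history_scores)
    s1 = sum(history_scores)
    s2 = sum(x * x for x in history_scores)
    var = (s2 / n - (s1 / n) ** 2) / (n - 1)
    return s1 / n, math.sqrt(max(var, 0.0))
```

    Because only the two running sums are needed, the per-batch dimension of the scoring arrays disappears, which is the source of the factor-of-2 memory saving noted above.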

  19. Tracking Debris Shed by a Space-Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Stuart, Phillip C.; Rogers, Stuart E.

    2009-01-01

    The DEBRIS software predicts the trajectories of debris particles shed by a space-shuttle launch vehicle during ascent, to aid in assessing potential harm to the space-shuttle orbiter and crew. The user specifies the location of release and other initial conditions for a debris particle. DEBRIS tracks the particle within an overset grid system by means of a computational fluid dynamics (CFD) simulation of the local flow field and a ballistic simulation that takes account of the mass of the particle and its aerodynamic properties in the flow field. The computed particle trajectory is stored in a file to be post-processed by other software for viewing and analyzing the trajectory. DEBRIS supplants a prior debris tracking code that took ~15 minutes to calculate a single particle trajectory: DEBRIS can calculate 1,000 trajectories in ~20 seconds on a desktop computer. Other improvements over the prior code include adaptive time-stepping to ensure accuracy, forcing at least one step per grid cell to ensure resolution of all CFD-resolved flow features, ability to simulate rebound of debris from surfaces, extensive error checking, a built-in suite of test cases, and dynamic allocation of memory.
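
    The adaptive time-stepping mentioned above can be illustrated with step doubling: take one full step and two half steps, and halve the step size until the two answers agree. Everything here is an illustrative stand-in (simple Euler steps, a toy gravity-plus-linear-drag model, a hypothetical `adaptive_step`), not the DEBRIS algorithm:

```python
def accel(v, g=-9.81, k=0.1):
    """Toy 1-D model: gravity plus linear drag (illustrative coefficients)."""
    return g - k * v

def euler(v, h):
    """One explicit Euler step for dv/dt = accel(v)."""
    return v + h * accel(v)

def adaptive_step(v, h, tol=1e-6):
    """Step-doubling error control: compare one full Euler step against
    two half steps; halve h until the difference is within tolerance."""
    while True:
        full = euler(v, h)
        half = euler(euler(v, h / 2), h / 2)
        if abs(full - half) <= tol or h < 1e-12:
            return half, h
        h /= 2
```

    A production integrator would also grow the step again when the error is far below tolerance; the loop above only shows the refinement half of the control.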

  20. Representation of particle motion in the auditory midbrain of a developing anuran.

    PubMed

    Simmons, Andrea Megela

    2015-07-01

    In bullfrog tadpoles, a "deaf period" of lessened responsiveness to the pressure component of sounds, evident during the end of the late larval period, has been identified in the auditory midbrain. But coding of underwater particle motion in the vestibular medulla remains stable over all of larval development, with no evidence of a "deaf period." Neural coding of particle motion in the auditory midbrain was assessed to determine if a "deaf period" for this mode of stimulation exists in this brain area in spite of its absence from the vestibular medulla. Recording sites throughout the developing laminar and medial principal nuclei show relatively stable thresholds to z-axis particle motion, up until the "deaf period." Thresholds then begin to increase from this point up through the rest of metamorphic climax, and significantly fewer responsive sites can be located. The representation of particle motion in the auditory midbrain is less robust during later compared to earlier larval stages, overlapping with but also extending beyond the restricted "deaf period" for pressure stimulation. The decreased functional representation of particle motion in the auditory midbrain throughout metamorphic climax may reflect ongoing neural reorganization required to mediate the transition from underwater to amphibious life.

  1. A photonic crystal hydrogel suspension array for the capture of blood cells from whole blood

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Cai, Yunlang; Shang, Luoran; Wang, Huan; Cheng, Yao; Rong, Fei; Gu, Zhongze; Zhao, Yuanjin

    2016-02-01

    Diagnosing hematological disorders based on the separation and detection of cells in the patient's blood is a significant challenge. We have developed a novel barcode particle-based suspension array that can simultaneously capture and detect multiple types of blood cells. The barcode particles are polyacrylamide (PAAm) hydrogel inverse opal microcarriers with characteristic reflection peak codes that remain stable during cell capture on their surfaces. The hydrophilic PAAm hydrogel scaffolds of the barcode particles can entrap various plasma proteins to capture different cells in the blood, with little damage to captured cells. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06368j

  2. GIZMO: Multi-method magneto-hydrodynamics+gravity code

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2014-10-01

    GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).

  3. Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.

    A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and a plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels, taking into account the damping effects and the different mode frequencies, have been calculated with the VENUS code for both ballooning and antiballooning TAE modes.

  4. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit

    2009-01-01

    Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.

  5. Some Progress in Large-Eddy Simulation using the 3-D Vortex Particle Method

    NASA Technical Reports Server (NTRS)

    Winckelmans, G. S.

    1995-01-01

    This two-month visit at CTR was devoted to investigating possibilities in LES modeling in the context of the 3-D vortex particle method (= vortex element method, VEM) for unbounded flows. A dedicated code was developed for that purpose. Although O(N^2) and thus slow, it offers the advantage that it can easily be modified to try out many ideas on problems involving up to N ≈ 10^4 particles. Energy spectra (which require O(N^2) operations per wavenumber) are also computed. Progress was realized in the following areas: particle redistribution schemes, relaxation schemes to maintain the solenoidal condition on the particle vorticity field, simple LES models and their VEM extension, and possible new avenues in LES. Model problems that involve strong interaction between vortex tubes were computed, together with diagnostics: total vorticity, linear and angular impulse, energy and energy spectrum, and enstrophy. More work is needed, however, especially regarding relaxation schemes and further validation and development of LES models for VEM. Finally, what works well will eventually have to be incorporated into the fast parallel tree code.
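
    Two of the diagnostics listed, total vorticity and linear impulse, are plain sums over the particles in a vortex element method: Omega = sum of alpha_p, and I = (1/2) sum of x_p x alpha_p, with x_p a particle position and alpha_p its vector-valued strength. A minimal sketch with hypothetical helper names:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def diagnostics(particles):
    """Total vorticity sum(alpha_p) and linear impulse 0.5 * sum(x_p x alpha_p)
    for vortex particles given as (position, strength) pairs."""
    omega = [0.0, 0.0, 0.0]
    impulse = [0.0, 0.0, 0.0]
    for x, alpha in particles:
        c = cross(x, alpha)
        for i in range(3):
            omega[i] += alpha[i]
            impulse[i] += 0.5 * c[i]
    return omega, impulse
```

    Conservation of these invariants over a run is a cheap check on the redistribution and relaxation schemes discussed in the abstract.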

  6. The Fireball integrated code package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobranich, D.; Powers, D.A.; Harper, F.T.

    1997-07-01

    Many deep-space satellites contain a plutonium heat source. An explosion, during launch, of a rocket carrying such a satellite offers the potential for the release of some of the plutonium. The fireball following such an explosion exposes any released plutonium to a high-temperature chemically-reactive environment. Vaporization, condensation, and agglomeration processes can alter the distribution of plutonium-bearing particles. The Fireball code package simulates the integrated response of the physical and chemical processes occurring in a fireball and the effect these processes have on the plutonium-bearing particle distribution. This integrated treatment of multiple phenomena represents a significant improvement in the state of the art for fireball simulations. Preliminary simulations of launch-second scenarios indicate: (1) most plutonium vaporization occurs within the first second of the fireball; (2) large non-aerosol-sized particles contribute very little to plutonium vapor production; (3) vaporization and both homogeneous and heterogeneous condensation occur simultaneously; (4) homogeneous condensation transports plutonium down to the smallest-particle sizes; (5) heterogeneous condensation precludes homogeneous condensation if sufficient condensation sites are available; and (6) agglomeration produces larger-sized particles but slows rapidly as the fireball grows.

  7. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.
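    As an illustration of the structure-preserving integrators that codes like APT rely on for long-term accuracy, here is the classic volume-preserving Boris-type pusher for a charged particle in electromagnetic fields (a generic sketch in normalized units; it is not APT's actual implementation, and the function name and signature are assumptions):

```python
import numpy as np

def boris_push(x, v, E, B, dt, qm=1.0):
    """One step of the Boris scheme: half electric kick, exact magnetic
    rotation, half electric kick. The rotation preserves |v| in a pure
    magnetic field, giving the long-term stability geometric codes aim for."""
    v_minus = v + 0.5 * qm * dt * E          # first half electric kick
    t = 0.5 * qm * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotated velocity
    v_new = v_plus + 0.5 * qm * dt * E       # second half electric kick
    return x + dt * v_new, v_new
```

    Unlike a generic Runge-Kutta step, the magnetic rotation here conserves kinetic energy exactly when E = 0, so the orbit does not spiral in or out over millions of steps.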

  8. Particle tracing modeling of ion fluxes at geosynchronous orbit

    DOE PAGES

    Brito, Thiago V.; Woodroffe, Jesse; Jordanova, Vania K.; ...

    2017-10-31

    The initial results of a coupled MHD/particle tracing method to evaluate particle fluxes in the inner magnetosphere are presented. This setup is capable of capturing the earthward particle acceleration process resulting from dipolarization events in the tail region of the magnetosphere. During the period of study, the MHD code was able to capture a dipolarization event, and the particle tracing algorithm was able to capture the effects of these disturbances and calculate proton fluxes in the nightside geosynchronous orbit region. The simulation captured dispersionless injections as well as the energy dispersion signatures that are frequently observed by satellites at geosynchronous orbit. Currently, ring current models rely on Maxwellian-type distributions based on either empirical flux values or sparse satellite data for their boundary conditions close to geosynchronous orbit. In spite of some differences in intensity and timing, the setup presented here is able to capture substorm injections, which represents an improvement in the way these ring current models can be coupled to MHD codes through their boundary conditions.

  10. On electron heating in a low pressure capacitively coupled oxygen discharge

    NASA Astrophysics Data System (ADS)

    Gudmundsson, J. T.; Snorrason, D. I.

    2017-11-01

    We use the one-dimensional object-oriented particle-in-cell Monte Carlo collision code oopd1 to explore the charged particle densities, the electronegativity, the electron energy probability function, and the electron heating mechanism in a single frequency capacitively coupled oxygen discharge, when the applied voltage amplitude is varied. We explore discharges operated at 10 mTorr, where electron heating within the plasma bulk (the electronegative core) dominates, and at 50 mTorr, where sheath heating dominates. At 10 mTorr, the discharge is operated in a combined drift-ambipolar and α-mode, and at 50 mTorr, it is operated in the pure α-mode. At 10 mTorr, the effective electron temperature is high and increases with increased driving voltage amplitude, while at 50 mTorr, the effective electron temperature is much lower, in particular, within the electronegative core, where it is roughly 0.2-0.3 eV, and varies only a little with the voltage amplitude.

  11. Matter power spectrum and the challenge of percent accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10⁹ h⁻¹ M⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
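    For reference, the basic quantity being compared above, the matter power spectrum of a gridded overdensity field, can be estimated as sketched below (a simplified spherically averaged estimator with assumed conventions; production pipelines add mass-assignment deconvolution and shot-noise corrections):

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=16):
    """Spherically averaged P(k) of an overdensity field delta on an n^3
    periodic grid of side box_size, using P(k) = |delta_k|^2 * V with
    FFT modes normalized by the number of cells."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) / delta.size           # normalized modes
    kf = 2 * np.pi / box_size                            # fundamental mode
    k1 = np.fft.fftfreq(n, d=1.0 / n) * kf               # +/- wavenumbers
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf              # non-negative axis
    kx, ky, kzg = np.meshgrid(k1, k1, kz, indexing='ij')
    k = np.sqrt(kx**2 + ky**2 + kzg**2).ravel()
    pk = (np.abs(delta_k)**2).ravel() * box_size**3      # raw power
    edges = np.linspace(kf, k.max() + 1e-12, n_bins + 1)
    idx = np.digitize(k, edges)                          # 0 drops the DC mode
    counts = np.bincount(idx, minlength=n_bins + 2)[1:n_bins + 1]
    psum = np.bincount(idx, weights=pk, minlength=n_bins + 2)[1:n_bins + 1]
    ksum = np.bincount(idx, weights=k, minlength=n_bins + 2)[1:n_bins + 1]
    good = counts > 0
    return ksum[good] / counts[good], psum[good] / counts[good]
```

    Code-comparison studies like the one above run estimators of this kind on the same snapshots from each simulation and inspect the ratios of the binned P(k).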

  12. Adjoint of the global Eulerian-Lagrangian coupled atmospheric transport model (A-GELCA v1.0): development and validation

    NASA Astrophysics Data System (ADS)

    Belikov, Dmitry A.; Maksyutov, Shamil; Yaremchuk, Alexey; Ganshin, Alexander; Kaminski, Thomas; Blessing, Simon; Sasakawa, Motoki; Gomez-Pelaez, Angel J.; Starchenko, Alexander

    2016-02-01

    We present the development of the Adjoint of the Global Eulerian-Lagrangian Coupled Atmospheric (A-GELCA) model that consists of the National Institute for Environmental Studies (NIES) model as an Eulerian three-dimensional transport model (TM), and FLEXPART (FLEXible PARTicle dispersion model) as the Lagrangian Particle Dispersion Model (LPDM). The forward tangent linear and adjoint components of the Eulerian model were constructed directly from the original NIES TM code using an automatic differentiation tool known as TAF (Transformation of Algorithms in Fortran; http://www.FastOpt.com), with additional manual pre- and post-processing aimed at improving the transparency and clarity of the code and optimizing computational performance, including MPI (Message Passing Interface) support. The Lagrangian component did not require any code modification, as LPDMs are self-adjoint and track a significant number of particles backward in time in order to calculate the sensitivity of the observations to the neighboring emission areas. The constructed Eulerian adjoint was coupled with the Lagrangian component at a time boundary in the global domain. The simulations presented in this work were performed using the A-GELCA model in forward and adjoint modes. The forward simulation shows that the coupled model improves reproduction of the seasonal cycle and short-term variability of CO2. The mean bias and standard deviation for five of the six Siberian sites considered decrease by roughly 1 ppm when using the coupled model. The adjoint of the Eulerian model was shown, through several numerical tests, to be very accurate (mismatches around ±6×10⁻¹⁴, close to machine epsilon) compared to direct forward sensitivity calculations. The developed adjoint of the coupled model combines the flux conservation and stability of an Eulerian discrete adjoint formulation with the flexibility, accuracy, and high resolution of a Lagrangian backward trajectory formulation. 
A-GELCA will be incorporated into a variational inversion system designed to optimize surface fluxes of greenhouse gases.
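    Adjoint-accuracy claims like the machine-epsilon agreement above are typically verified with a dot-product (inner-product) test: for a linear forward model F and its claimed adjoint F*, the identity ⟨Fx, y⟩ = ⟨x, F*y⟩ must hold for random vectors. A minimal sketch with a toy linear operator (the function names and the toy operator are illustrative assumptions, not A-GELCA code):

```python
import numpy as np

def dot_product_test(forward, adjoint, n, m, seed=0):
    """Check <F x, y> == <x, F* y> for a linear forward model mapping R^n
    to R^m and its claimed adjoint; returns the relative mismatch, which
    should sit near machine epsilon for a correct adjoint."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    y = rng.standard_normal(m)
    lhs = np.dot(forward(x), y)
    rhs = np.dot(x, adjoint(y))
    return abs(lhs - rhs) / max(abs(lhs), abs(rhs))

# Toy "transport" operator: a sparse advection-like matrix whose adjoint
# is simply its transpose.
A = np.eye(5, k=1) * 0.4 + np.eye(5) * 0.6
mismatch = dot_product_test(lambda x: A @ x, lambda y: A.T @ y, 5, 5)
```

    For a discrete adjoint generated by a tool like TAF, a mismatch far above machine epsilon usually points to a non-differentiable branch or a hand-edited section of the generated code.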

  13. FSFE: Fake Spectra Flux Extractor

    NASA Astrophysics Data System (ADS)

    Bird, Simeon

    2017-10-01

    The fake spectra flux extractor generates simulated quasar absorption spectra from a particle- or adaptive-mesh-based hydrodynamic simulation. It is implemented as a Python module. It can produce both hydrogen and metal line spectra, if the simulation includes metals. A Cloudy table for metal ionization fractions is included. Unlike earlier spectral generation codes, it produces absorption from each particle close to the sight-line individually, rather than first producing an average density in each spectral pixel, thus preserving substantially more of the small-scale velocity structure of the gas. The code supports both Gadget (ascl:0003.001) and AREPO.

  14. Dust-wall and dust-plasma interaction in the MIGRAINe code

    NASA Astrophysics Data System (ADS)

    Vignitchouk, L.; Tolias, P.; Ratynskaia, S.

    2014-09-01

    The physical models implemented in the recently developed dust dynamics code MIGRAINe are described. A major update of the treatment of secondary electron emission, stemming from models adapted to typical scrape-off layer temperatures, is reported. Sputtering and plasma species backscattering are introduced from fits of available experimental data and their relative importance to dust charging and heating is assessed in fusion-relevant scenarios. Moreover, the description of collisions between dust particles and plasma-facing components, based on the approximation of elastic-perfectly plastic adhesive spheres, has been upgraded to take into account the effects of particle size and temperature.

  15. GRMHD and GRPIC Simulations

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Mizuno, Y.; Watson, M.; Fuerst, S.; Wu, K.; Hardee, P.; Fishman, G. J.

    2007-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various 1-dimensional test problems in both special and general relativity by using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous code. The simulation results show jet formation from a geometrically thin accretion disk near both nonrotating and rotating black holes. We will discuss how the jet properties depend on the rotation of the black hole and on the magnetic field configuration, including issues for future research. A general relativistic particle-in-cell code (GRPIC) has been developed using the Kerr-Schild metric. The code includes kinetic effects and is consistent with the GRMHD code. Since the gravitational force acting on particles is extreme near black holes, there are some numerical difficulties in describing these processes. The preliminary simulation consists of an accretion disk and a free-falling corona. Results indicate that particles are ejected from the black hole. These results are consistent with other GRMHD simulations. The GRPIC simulation results will be presented, along with some remarks and future improvements. The emission from relativistic flows in black hole systems is calculated using a fully general relativistic radiative transfer formulation, with flow structures obtained from GRMHD simulations, considering thermal free-free emission and thermal synchrotron emission. Bright filament-like features protrude (visually) from the accretion disk surface; these are enhancements of synchrotron emission where the magnetic field roughly aligns with the line-of-sight in the co-moving frame. 
The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We would like to extend this research using GRPIC simulations and examine a possible new mechanism for certain X-ray quasi-periodic oscillations (QPOs) observed in black hole X-ray binaries.
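    The HLL approximate Riemann solver mentioned above blends the left and right interface states using estimates of the fastest signal speeds. A minimal scalar, non-relativistic sketch of the flux formula (the GRMHD version applies the same formula componentwise to the full set of conserved MHD variables, with relativistic wave-speed estimates):

```python
def hll_flux(uL, uR, fL, fR, sL, sR):
    """HLL numerical flux at a cell interface from left/right conserved
    states (uL, uR), physical fluxes (fL, fR), and left/right signal-speed
    estimates (sL <= sR). Upwinds fully when all waves move one way."""
    if sL >= 0.0:
        return fL                       # all waves move right: pure upwind
    if sR <= 0.0:
        return fR                       # all waves move left: pure upwind
    # Subsonic case: single averaged intermediate state between sL and sR.
    return (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL)
```

    The dissipation introduced by the sL*sR*(uR − uL) term is what lets a conservative shock-capturing scheme handle discontinuities without oscillations.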

  16. Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu

    2014-01-01

    The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. The dose delivered by a charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte-Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill" effect.

  17. Compressed Air Quality, A Case Study In Paiton Coal Fired Power Plant Unit 1 And 2

    NASA Astrophysics Data System (ADS)

    Indah, Nur; Kusuma, Yuriadi; Mardani

    2018-03-01

    The compressed air system is an essential part of the utility systems of a power plant, including a steam power plant. In PLN's coal-fired power plant, Paiton units 1 and 2, four centrifugal air compressors produce as much as 5,652 cfm of compressed air with an electric power capacity of 1,200 kW. Electricity consumption to operate the centrifugal compressors is 7,104,117 kWh per year. Compressed air generation must be not only sufficient in quantity (flow rate) but must also meet the required air quality standards. Compressed air at the steam power plant is used for service air, instrument air, and fly ash handling. This study measures several important parameters related to air quality, followed by an analysis of potential disturbances, equipment breakdown, and opportunities to reduce energy consumption under the existing compressed air conditions. The measurements include counting the number of dust particles, moisture content, relative humidity, and compressed air pressure. The compressed air pressure generated by the compressors is about 8.4 barg and decreases to 7.7 barg at the furthest point; the resulting pressure drop of 0.63 barg still satisfies end-user requirements. Particle counts in the compressed air reach 170,752 particles for the 0.3 micron size and 45,245 particles for the 0.5 micron size. Particle measurements were conducted at several points; at some of them the dust particle count exceeds the standards set by ISO 8573.1-2010 and the NACE code, so the air treatment process needs to be improved. Moisture content in the compressed air was assessed by measuring the pressure dew point (PDP) temperature at several points, with results ranging from -28.4 to 30.9 °C. 
The recommended improvements to compressed air quality in the Paiton unit 1 and 2 steam power plant have the potential to extend the life of instrumentation equipment, improve equipment reliability, and reduce energy consumption by up to 502,579 kWh per year.

  18. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code, which we previously added to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests, and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. 
RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.
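    The core SPH operation underlying a code like GRADSPMHD is the kernel-weighted density summation over neighbors. A minimal sketch with the standard 3D cubic-spline kernel and a fixed smoothing length (the “GRAD-h” formalism additionally iterates a per-particle h; names and conventions here are assumptions, not GRADSPMHD code):

```python
import numpy as np

def sph_density(pos, mass, h):
    """SPH density estimate rho_i = sum_j m_j W(|r_i - r_j|, h) using the
    3D cubic-spline (M4) kernel with compact support 2h. pos: (N, 3)
    positions; mass: (N,) particle masses; h: smoothing length."""
    def w(q):
        sigma = 1.0 / (np.pi * h**3)                    # 3D normalization
        return sigma * np.where(q < 1.0, 1 - 1.5*q**2 + 0.75*q**3,
                       np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
    # Brute-force all-pairs distances; a production code uses the tree
    # neighbor search mentioned in the abstract.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (mass[None, :] * w(d / h)).sum(axis=1)
```

    Pressure, MHD forces, and the divergence-cleaning terms are then built from gradients of the same kernel, which is what the variational derivation in the paper systematizes.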

  19. Radiation from advanced solid rocket motor plumes

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Smith, Sheldon D.; Myruski, Brian L.

    1994-01-01

    The overall objective of this study was to develop an understanding of solid rocket motor (SRM) plumes in sufficient detail to accurately explain the majority of plume radiation test data. Improved flowfield and radiation analysis codes were developed to accurately and efficiently account for all the factors which affect radiation heating from rocket plumes. These codes were verified by comparing predicted plume behavior with measured NASA/MSFC ASRM test data. Upon conducting a thorough review of the current state of the art of SRM plume flowfield and radiation prediction methodology and the pertinent database, the following analyses were developed for future design use. The NOZZRAD code was developed for preliminary base heating design and Al2O3 particle optical property data evaluation using a generalized two-flux solution to the radiative transfer equation. The IDARAD code was developed for rapid evaluation of plume radiation effects using the spherical harmonics method of differential approximation to the radiative transfer equation. The FDNS CFD code with fully coupled Euler-Lagrange particle tracking was validated by comparison to predictions made with the industry-standard RAMP code for SRM nozzle flowfield analysis. The FDNS code provides the ability to analyze not only rocket nozzle flow, but also axisymmetric and three-dimensional plume flowfields with state-of-the-art CFD methodology. Procedures for conducting meaningful thermo-vision camera studies were developed.

  20. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization

    PubMed Central

    Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, and posts, but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to resolving authorship disputes and software plagiarism detection. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. The method begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, whose weights are produced by a hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, with an acceptable overhead. PMID:29095934
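    The PSO half of such a hybrid scheme can be sketched as a global-best particle swarm minimizing a loss over a weight vector (a generic sketch, not the paper's implementation; the stand-in loss, coefficients, and names are all assumptions):

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=200, seed=0):
    """Plain global-best PSO: each particle's velocity blends inertia, a
    pull toward its personal best, and a pull toward the swarm best. In the
    paper's setup `loss` would be the BP network's training error over the
    19 authorship metrics; here it is any callable on a weight vector."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))          # initial positions
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                # swarm best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in for a network loss: distance of "weights" from a known optimum.
w_best, val = pso_minimize(lambda w: np.sum((w - 0.5)**2), dim=5)
```

    In the hybrid scheme, the swarm-best vector found this way would seed gradient-based BP refinement, combining PSO's global search with BP's fast local convergence.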

  1. Production of energetic light fragments in extensions of the CEM and LAQGSM event generators of the Monte Carlo transport code MCNP6 [Production of energetic light fragments in CEM, LAQGSM, and MCNP6]

    DOE PAGES

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie; Gudima, Konstantin K.; ...

    2017-03-23

    We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for the possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7 in the case of CEM, and A ≤ 12 in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Lastly, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.

  3. Space Applications of the FLUKA Monte-Carlo Code: Lunar and Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Anderson, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Elkhayari, N.; Empl, A.; Fasso, A.; Ferrari, A.; hide

    2004-01-01

    NASA has recognized the need for making additional heavy-ion collision measurements at the U.S. Brookhaven National Laboratory in order to support further improvement of several particle physics transport-code models for space exploration applications. FLUKA has been identified as one of these codes and we will review the nature and status of this investigation as it relates to high-energy heavy-ion physics.

  4. Alpha particle confinement in tandem mirrors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devoto, R.S.; Ohnishi, M.; Kerns, J.

    1980-10-10

    Mechanisms leading to loss of alpha particles from non-axisymmetric tandem mirrors are considered. Stochastic diffusion due to bounce-drift resonances, which can cause rapid radial losses of high-energy alpha particles, can be suppressed by imposing a 20% rise in the axisymmetric fields before the quadrupole transition sections. Alpha particles should then be well confined until they slow to thermal energies, when they enter the resonant plateau regime. A fast code for the computation of drift behavior in reactors is described. Sample calculations are presented for resonant particles in a proposed coil set for the Tandem Mirror Next Step.

  5. Variability of the contrail radiative forcing due to crystal shape

    NASA Astrophysics Data System (ADS)

    Markowicz, K. M.; Witek, M. L.

    2011-12-01

    The aim of this study is to examine the influence of particles' shape and particles' optical properties on the contrail radiative forcing. Contrail optical properties in the shortwave and longwave range are derived using a ray-tracing geometric method and the discrete dipole approximation method, respectively. Both methods present good correspondence of the single scattering albedo and the asymmetry parameter in a transition range (3-7μm). We compare optical properties defined following simple 10 crystals habits randomly oriented: hexagonal plates, hexagonal columns with different aspect ratio, and spherical. There are substantial differences in single scattering properties between ten crystal models investigated here (e.g. hexagonal columns and plates with different aspect ratios, spherical particles). The single scattering albedo and the asymmetry parameter both vary up to 0.1 between various crystal shapes. Radiative forcing calculations were performed using a model which includes an interface between the state-of-the-art radiative transfer model Fu-Liou and databases containing optical properties of the atmosphere and surface reflectance and emissivity. This interface allows to determine radiative fluxes in the atmosphere and to estimate the contrail radiative forcing for clear- and all-sky (including natural clouds) conditions for various crystal shapes. The Fu-Liou code is fast and therefore it is suitable for computing radiative forcing on a global scale. At the same time it has sufficiently good accuracy for such global applications. A noticeable weakness of the Fu-Liou code is that it does not take into account the 3D radiative effects, e.g. cloud shading and horizontal. Radiative transfer model calculations were performed at horizontal resolution of 5x5 degree and time resolution of 20 min during day and 3 h during night. 
    In order to calculate a geographic distribution of the global and annual mean contrail radiative forcing, the contrail cover must be determined. Two cases are discussed here: a 1% homogeneous contrail cover and the contrail cover provided by Rädel and Shine (2008). The second, more realistic distribution combines the AERO2K flight inventory with meteorological data and normalizes it with respect to the contrail cover derived from satellite observations. Simulations performed with the Fu-Liou model show significant variability of the shortwave, longwave, and net radiative forcing with crystal shape. The nonspherical crystals have smaller net forcing than the spherical particles; the differences in net radiative forcing between optical models reach up to 50%. The hexagonal column and hexagonal plate particles show the smallest net radiative forcing, while the largest forcing is obtained for the spheres. The global and annual mean shortwave, longwave, and net contrail radiative forcing, averaged over all crystal models and assuming an optical depth of 0.3 at visible wavelengths, is -5.7, 16.8, and 11.1 mW/m2, respectively. The ratio of the radiative forcing's standard deviation to its mean value, derived using the 10 ice particle models, is about 0.2 for the shortwave, 0.14 for the longwave, and 0.23 for the net radiation.
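    The spread-to-mean ratios quoted above are a simple statistic that can be reproduced directly; the sketch below uses invented per-model net forcing values (the abstract does not list the individual model results), with only the ratio definition taken from the text.

```python
# Ratio of the standard deviation to the mean across crystal-shape models,
# as used for the 0.2 / 0.14 / 0.23 figures above. The forcing list here is
# a hypothetical placeholder, not the paper's per-model values.
import statistics

def spread_ratio(forcings):
    """Population standard deviation divided by the absolute mean."""
    return statistics.pstdev(forcings) / abs(statistics.fmean(forcings))

# Example with invented net forcings (mW/m^2) for ten crystal models:
net = [8.9, 9.5, 10.2, 10.8, 11.0, 11.3, 11.9, 12.4, 13.1, 14.2]
ratio = spread_ratio(net)
```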

  6. Turbulent Radiation Effects in HSCT Combustor Rich Zone

    NASA Technical Reports Server (NTRS)

    Hall, Robert J.; Vranos, Alexander; Yu, Weiduo

    1998-01-01

    A joint UTRC-University of Connecticut theoretical program was based on describing coupled soot formation and radiation in turbulent flows using stretched flamelet theory. This effort involved using the model jet fuel kinetics mechanism to predict soot growth in flamelets at elevated pressure, incorporating an efficient model for turbulent thermal radiation into a discrete transfer radiation code, and coupling the soot growth, flowfield, and radiation algorithms. The soot calculations used a recently developed opposed jet code which couples the dynamical equations of size-class-dependent particle growth with complex chemistry. Several of the tasks represent technical firsts; among these are the prediction of soot from a detailed jet fuel kinetics mechanism, the inclusion of pressure effects in the soot particle growth equations, and the inclusion of the efficient turbulent radiation algorithm in a combustor code.

  7. WOLF: a computer code package for the calculation of ion beam trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, D.L.

    1985-10-01

    The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles will then be traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram PISA forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results to be performed.
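    The field-solve step such a code performs can be sketched as a relaxation solve of Laplace's equation between two electrodes. WOLF itself uses a triangular lattice in FORTRAN and includes space charge; the toy below is a square-grid, vacuum-only Python illustration, not the WOLF algorithm.

```python
# Jacobi relaxation of Laplace's equation (Poisson's equation with zero
# source) between two parallel-plate electrodes; `fixed` marks Dirichlet
# boundary nodes whose potential is held constant.
import numpy as np

def solve_laplace(phi, fixed, iters=2000):
    """Jacobi iteration: replace each free node by the mean of its neighbors."""
    for _ in range(iters):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(fixed, phi, avg)
    return phi

# Parallel-plate test: potential 0 on the left edge, 1 on the right edge;
# the converged interior potential is linear in x.
n = 33
phi = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
fixed[:, 0] = fixed[:, -1] = True
phi[:, -1] = 1.0
phi = solve_laplace(phi, fixed)
```

Particle tracing then follows the field E = -grad(phi) obtained from the converged potential.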

  8. ALPHACAL: A new user-friendly tool for the calibration of alpha-particle sources.

    PubMed

    Timón, A Fernández; Vargas, M Jurado; Gallardo, P Álvarez; Sánchez-Oro, J; Peralta, L

    2018-05-01

    In this work, we present and describe the program ALPHACAL, specifically developed for the calibration of alpha-particle sources. It is therefore more user-friendly and less time-consuming than multipurpose codes developed for a wide range of applications. The program is based on the recently developed code AlfaMC, which specifically simulates the transport of alpha particles. Both cylindrical and point sources mounted on the surface of polished backings can be simulated, as is the convention in experimental measurements of alpha-particle sources. In addition to the efficiency calculation and determination of the backscattering coefficient, some additional tools are available to the user, such as visualization of the energy spectrum and the use of energy cut-offs or low-energy tail corrections. ALPHACAL has been implemented in C++ using the Qt library, so it is available for Windows, macOS and Linux platforms. It is free and can be provided upon request to the authors. Copyright © 2018 Elsevier Ltd. All rights reserved.
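    The efficiency calculation at the heart of such a tool amounts to counting the fraction of emitted alpha particles that reach the detector. A hypothetical point-source, coaxial disk-detector version, using only textbook solid-angle geometry (not ALPHACAL internals), can be sketched as:

```python
# Monte Carlo counting efficiency for a point alpha source on the axis of a
# circular detector. The geometry and the analytic solid-angle check are
# standard results; the dimensions below are arbitrary examples.
import math
import random

def mc_efficiency(d, r, n=200_000, seed=1):
    """Source at distance d from a disk detector of radius r (same units)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)      # isotropic emission direction
        if cos_t <= 0.0:
            continue                         # emitted away from the detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if d * sin_t / cos_t <= r:           # radial offset at the detector plane
            hits += 1
    return hits / n

eff = mc_efficiency(5.0, 3.0)
# Analytic check: efficiency = Omega / 4*pi = (1 - d / sqrt(d^2 + r^2)) / 2.
analytic = 0.5 * (1.0 - 5.0 / math.sqrt(5.0**2 + 3.0**2))
```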

  9. Assessment of Microphysical Models in the National Combustion Code (NCC) for Aircraft Particulate Emissions: Particle Loss in Sampling Lines

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2008-01-01

    This paper first describes the fluid network approach recently implemented into the National Combustion Code (NCC) for simulating the transport of aerosols (volatile particles and soot) in particulate sampling systems. This network-based approach complements the two approaches already in the NCC, namely the lower-order temporal approach and the CFD-based approach. The accuracy and computational costs of these three approaches are then investigated in terms of their application to the prediction of particle losses through sample transmission and distribution lines. Their predictive capabilities are assessed by comparing the computed results with experimental data. The present work will help establish standard methodologies for measuring the size and concentration of particles in high-temperature, high-velocity jet engine exhaust. It also represents the first step of a long-term effort to validate physics-based tools for the prediction of aircraft particulate emissions.
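    For laminar flow, diffusional particle loss in a sampling line is often estimated with the classic Gormley-Kennedy penetration correlation, sketched below. This is a textbook formula offered for orientation only, not necessarily the loss model implemented in the NCC fluid network approach.

```python
# Gormley-Kennedy penetration (surviving fraction) for diffusional deposition
# in fully developed laminar tube flow, in terms of xi = pi * D * L / Q.
import math

def penetration(D, L, Q):
    """D: particle diffusivity (m^2/s), L: tube length (m), Q: flow (m^3/s)."""
    xi = math.pi * D * L / Q
    if xi < 0.02:                        # short-tube (small-deposition) series
        return 1.0 - 2.5638 * xi ** (2.0 / 3.0) + 1.2 * xi
    return (0.8191 * math.exp(-3.657 * xi)
            + 0.0975 * math.exp(-22.3 * xi)
            + 0.0325 * math.exp(-57.0 * xi))
```

Smaller particles (larger D) and longer lines give lower penetration, which is the trend the sampling-line benchmarks probe.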

  10. DIAPHANE: A portable radiation transport library for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Reed, Darren S.; Dykes, Tim; Cabezón, Rubén; Gheller, Claudio; Mayer, Lucio

    2018-05-01

    One of the most computationally demanding aspects of the hydrodynamical modeling of astrophysical phenomena is the transport of energy by radiation or relativistic particles. Physical processes involving energy transport are ubiquitous and of capital importance in many scenarios ranging from planet formation to cosmic structure evolution, including explosive events like core-collapse supernovae or gamma-ray bursts. Moreover, the ability to model and hence understand these processes has often been limited by the approximations and incompleteness in the treatment of radiation and relativistic particles. The DIAPHANE project has focused on developing a portable and scalable library that handles the transport of radiation and particles (in particular neutrinos) independently of the underlying hydrodynamic code. In this work, we present the computational framework and the functionalities of the first version of the DIAPHANE library, which has been successfully ported to three different smoothed-particle hydrodynamics codes, GADGET2, GASOLINE and SPHYNX. We also present validation of different modules solving the equations of radiation and neutrino transport using different numerical schemes.

  11. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iandola, F N; O'Brien, M J; Procassini, R J

    2010-11-29

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  12. Monte Carlo Calculations of Suprathermal Alpha Particles Trajectories in the Rippled Field of TFTR

    NASA Astrophysics Data System (ADS)

    Punjabi, Alkesh; Lam, Maria; Boozer, Allen

    1996-11-01

    We study the transport of suprathermal alpha particles and their energy deposition into electrons, deuterons, tritons and carbon-12 impurities in the rippled field of TFTR. The Monte Carlo code developed by Punjabi and Boozer for the transport of plasma particles due to MHD modes in toroidal plasmas (Punjabi A., Boozer A., Lam M., Kim M., and Burke K., J. Plasma Phys. 44, 405 (1990)) is used in conjunction with the SHAF code of White (White R. B. and Boozer A., PPPL-3094 (1995)). We integrate the drift Hamiltonian equations of motion in non-canonical, rectangular, Boozer coordinates. The deposition of alpha energy into electrons, deuterons, tritons and C-12 particles is calculated and recorded. The effects of energy and pitch angle scattering are included. The results of this study will be presented. This work is supported by the US DOE. The assistance provided by Professors R. B. White and S. Zweben of PPPL is gratefully acknowledged.
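    The pitch-angle-scattering step in guiding-center Monte Carlo codes of this kind is commonly the Lorentz-operator update of Boozer and Kuo-Petravic; a minimal sketch (with an arbitrary collision frequency, not the actual TFTR code) is:

```python
# One Monte Carlo pitch-angle-scattering step for lambda = v_parallel / v:
# lambda' = lambda * (1 - nu*dt) +/- sqrt((1 - lambda^2) * nu*dt).
# An initially co-passing beam isotropizes, so <lambda> decays toward zero.
import random
import statistics

def scatter(lam, nu_dt, rng):
    """Lorentz collision operator step; nu_dt = collision frequency * timestep."""
    xi = rng.choice((-1.0, 1.0))
    lam_new = lam * (1.0 - nu_dt) + xi * ((1.0 - lam * lam) * nu_dt) ** 0.5
    return max(-1.0, min(1.0, lam_new))     # guard against roundoff overshoot

rng = random.Random(0)
pitches = [1.0] * 5000                      # beam of co-passing particles
for _ in range(200):                        # 200 steps of nu*dt = 0.01 each
    pitches = [scatter(l, 0.01, rng) for l in pitches]
mean_pitch = statistics.fmean(pitches)      # expected ~ exp(-nu*t) = exp(-2)
```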

  13. Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.

    PubMed

    McGregor, T J; Spence, D J; Coutts, D W

    2008-01-01

    We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 x 10 x 4 mm^3, in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.
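    The depth-decoding step can be sketched as a hue-to-depth calibration: with the illumination color varying monotonically through the volume, a particle's measured hue maps back to its depth coordinate. The calibration endpoints below are invented; a real instrument needs a measured hue-depth curve.

```python
# Map a particle's RGB color to a depth within the spectrally coded volume.
# hue_near / hue_far are hypothetical calibration endpoints, and the 4 mm
# depth matches the measurement-volume depth quoted in the abstract.
import colorsys

def depth_from_rgb(r, g, b, hue_near=0.0, hue_far=0.8, depth_mm=4.0):
    """Linearly map hue (0-1, from colorsys) to depth through the sheet stack."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    frac = (h - hue_near) / (hue_far - hue_near)
    return max(0.0, min(1.0, frac)) * depth_mm
```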

  14. MARS15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mokhov, Nikolai

    MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption, and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histogramming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles, including neutrinos, is built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with the possibility of producing composite shapes and assemblies and their 3D visualization, along with import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format).
    The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package, which allows a very efficient and highly accurate description, modeling and visualization of beam-loss-induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.

  15. LIDT-DD: A new self-consistent debris disc model that includes radiation pressure and couples dynamical and collisional evolution

    NASA Astrophysics Data System (ADS)

    Kral, Q.; Thébault, P.; Charnoz, S.

    2013-10-01

    Context. In most current debris disc models, the dynamical and the collisional evolutions are studied separately with N-body and statistical codes, respectively, because of stringent computational constraints. In particular, incorporating collisional effects (especially destructive collisions) into an N-body scheme has proven a very arduous task because of the exponential increase in the number of particles it would imply. Aims: We present here LIDT-DD, the first code able to mix both approaches in a fully self-consistent way. Our aim is for it to be generic enough to be applied to any astrophysical case where we expect dynamics and collisions to be deeply interlocked with one another: planets in discs, violent massive breakups, destabilized planetesimal belts, bright exozodiacal discs, etc. Methods: The code takes its basic architecture from the LIDT3D algorithm for protoplanetary discs, but has been strongly modified and updated to handle the very constraining specificities of debris disc physics: high-velocity fragmenting collisions, radiation-pressure affected orbits, absence of gas that never relaxes initial conditions, etc. It has a 3D Lagrangian-Eulerian structure, where grains of a given size at a given location in a disc are grouped into super-particles or tracers whose orbits are evolved with an N-body code and whose mutual collisions are individually tracked and treated using a particle-in-a-box prescription designed to handle fragmenting impacts. To cope with the wide range of possible dynamics for same-sized particles at any given location in the disc, and in order not to lose important dynamical information, tracers are sorted and regrouped into dynamical families depending on their orbits. A reassignment routine that searches for redundant tracers in each family and reassigns them where they are needed prevents the number of tracers from diverging.
    Results: The LIDT-DD code has been successfully tested on simplified cases for which robust results have been obtained in past studies: we retrieve the classical features of particle size distributions in unperturbed discs, the outer radial density profiles scaling as ~r^-1.5 outside narrow collisionally active rings, and the depletion of small grains in dynamically cold discs. The potential of the new code is illustrated with the test case of the violent breakup of a massive planetesimal within a debris disc. Preliminary results show that we are able for the first time to quantify the timescale over which the signature of such massive break-ups can be detected. In addition to studying such violent transient events, the main potential future applications of the code are planet-disc interactions, and more generally, any configurations where dynamics and collisions are expected to be intricately connected.
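    The particle-in-a-box collision bookkeeping between tracers can be sketched as a per-timestep collision probability n * sigma * v_rel * dt; the numbers below are arbitrary illustrations, and the fragmentation-outcome tracking that LIDT-DD performs is omitted.

```python
# Stochastic collision test for a tracer crossing a swarm of targets with
# number density n, collision cross-section sigma, and relative speed v_rel.
# Valid only when the per-step probability n*sigma*v_rel*dt << 1.
import random

def collide(n_density, sigma, v_rel, dt, rng):
    """Return True if a collision occurs during this timestep."""
    return rng.random() < n_density * sigma * v_rel * dt

rng = random.Random(42)
n_d, sigma, v_rel, dt = 1.0e-6, 100.0, 10.0, 1.0   # => p = 1e-3 per step
# Over many steps the collision count approaches p * N_steps:
count = sum(collide(n_d, sigma, v_rel, dt, rng) for _ in range(100_000))
```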

  16. Multiuser Transmit Beamforming for Maximum Sum Capacity in Tactical Wireless Multicast Networks

    DTIC Science & Technology

    2006-08-01

    Search-result snippet (full abstract unavailable): "…commonly used extended Kalman filter. See [2, 5, 6] for recent tutorial overviews. In particle filtering, continuous distributions are approximated by…" "…signals (using and developing associated particle filtering tools). Our work on these topics has been reported in seven (IEEE, SIAM) journal papers…" Subject terms: multidimensional scaling, tracking, intercept, particle filters.

  17. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of underground tunnels are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are used to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm based on particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using MC-CDMA-based wireless sensor nodes, time-frequency coded cooperative transmission, and the D-PSO algorithm. PMID:26343660
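    D-PSO builds on the textbook particle swarm optimizer; a minimal continuous-variable version is sketched below, shown minimizing a simple quadratic rather than the discrete symbol-vector search the multiuser detector actually performs.

```python
# Canonical particle swarm optimization: each particle tracks its personal
# best, and the swarm tracks a global best; velocities blend inertia with
# attraction toward both bests.
import random

def pso(f, dim, n_particles=30, iters=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the D-PSO setting the objective would instead score candidate transmitted-symbol vectors against the received MC-CDMA signal.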

  18. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of underground tunnels are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are used to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm based on particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using MC-CDMA-based wireless sensor nodes, time-frequency coded cooperative transmission, and the D-PSO algorithm.

  19. Estimating ice particle scattering properties using a modified Rayleigh-Gans approximation

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Verlinde, Johannes

    2014-09-01

    A modification to the Rayleigh-Gans approximation is made that includes self-interactions between different parts of an ice crystal, which both improves the accuracy of the Rayleigh-Gans approximation and extends its applicability to polarization-dependent parameters. This modified Rayleigh-Gans approximation is both efficient and reasonably accurate for particles with at least one dimension much smaller than the wavelength (e.g., dendrites at millimeter or longer wavelengths) or particles with sparse structures (e.g., low-density aggregates). Relative to the Generalized Multiparticle Mie method, backscattering reflectivities at horizontal transmit and receive polarization (HH) (ZHH) computed with this modified Rayleigh-Gans approach are about 3 dB more accurate than with the traditional Rayleigh-Gans approximation. For realistic particle size distributions and pristine ice crystals the modified Rayleigh-Gans approach agrees with the Generalized Multiparticle Mie method to within 0.5 dB for ZHH whereas for the polarimetric radar observables differential reflectivity (ZDR) and specific differential phase (KDP) agreement is generally within 0.7 dB and 13%, respectively. Compared to the A-DDA code, the modified Rayleigh-Gans approximation is several to tens of times faster if scattering properties for different incident angles and particle orientations are calculated. These accuracies and computational efficiencies are sufficient to make this modified Rayleigh-Gans approach a viable alternative to the Rayleigh-Gans approximation in some applications such as millimeter to centimeter wavelength radars and to other methods that assume simpler, less accurate shapes for ice crystals. This method should not be used on materials with dielectric properties much different from ice and on compact particles much larger than the wavelength.

  20. PDT - PARTICLE DISPLACEMENT TRACKING SOFTWARE

    NASA Technical Reports Server (NTRS)

    Wernet, M. P.

    1994-01-01

    Particle Imaging Velocimetry (PIV) is a quantitative velocity measurement technique for measuring instantaneous planar cross sections of a flow field. The technique offers very high precision (1%) directionally resolved velocity vector estimates, but its use has been limited by high equipment costs and complexity of operation. Particle Displacement Tracking (PDT) is an all-electronic PIV data acquisition and reduction procedure which is simple, fast, and easily implemented. The procedure uses a low power, continuous wave laser and a Charge-Coupled Device (CCD) camera to electronically record the particle images. A frame grabber board in a PC is used for data acquisition and reduction processing. PDT eliminates the need for photographic processing, system costs are moderately low, and reduced data are available within seconds of acquisition. The technique results in velocity estimate accuracies on the order of 5%. The software is fully menu-driven from the acquisition to the reduction and analysis of the data. Options are available to acquire a single image or 5- or 25-field series of images separated in time by multiples of 1/60 second. The user may process each image, specifying its boundaries to remove unwanted glare from the periphery and adjusting its background level to clearly resolve the particle images. Data reduction routines determine the particle image centroids and create time history files. PDT then identifies the velocity vectors which describe the particle movement in the flow field. Graphical data analysis routines are included which allow the user to graph the time history files and display the velocity vector maps, interpolated velocity vector grids, iso-velocity vector contours, and flow streamlines. The PDT data processing software is written in FORTRAN 77 and the data acquisition routine is written in C-Language for 80386-based IBM PC compatibles running MS-DOS v3.0 or higher. 
Machine requirements include 4 MB RAM (3 MB Extended), a single or multiple frequency RGB monitor (EGA or better), a math co-processor, and a pointing device. The printers supported by the graphical analysis routines are the HP Laserjet+, Series II, and Series III with at least 1.5 MB memory. The data acquisition routines require the EPIX 4-MEG video board and optional 12.5MHz oscillator, and associated EPIX software. Data can be acquired from any CCD or RS-170 compatible video camera with pixel resolution of 600hX400v or better. PDT is distributed on one 5.25 inch 360K MS-DOS format diskette. Due to the use of required proprietary software, executable code is not provided on the distribution media. Compiling the source code requires the Microsoft C v5.1 compiler, Microsoft QuickC v2.0, the Microsoft Mouse Library, EPIX Image Processing Libraries, the Microway NDP-Fortran-386 v2.1 compiler, and the Media Cybernetics HALO Professional Graphics Kernal System. Due to the complexities of the machine requirements, COSMIC strongly recommends the purchase and review of the documentation prior to the purchase of the program. The source code, and sample input and output files are provided in PKZIP format; the PKUNZIP utility is included. PDT was developed in 1990. All trade names used are the property of their respective corporate owners.
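    The centroid-and-displacement reduction PDT performs can be sketched as an intensity-weighted centroid per particle image, with the displacement between exposures divided by the field separation giving the velocity estimate; the tiny synthetic frames below are invented for illustration.

```python
# Intensity-weighted centroid of pixels above a threshold, followed by a
# velocity estimate from the centroid displacement between two exposures
# separated by one 1/60 s video field.
def centroid(img, threshold=0):
    """Return the (row, col) intensity-weighted centroid of a small image."""
    total = sr = sc = 0.0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v > threshold:
                total += v
                sr += v * r
                sc += v * c
    return sr / total, sc / total

frame_a = [[0, 0, 0, 0],
           [0, 9, 1, 0],
           [0, 1, 0, 0],
           [0, 0, 0, 0]]
frame_b = [[0, 0, 0, 0],          # same particle shifted one pixel right
           [0, 0, 9, 1],
           [0, 0, 1, 0],
           [0, 0, 0, 0]]
dt = 1.0 / 60.0
(r0, c0), (r1, c1) = centroid(frame_a), centroid(frame_b)
velocity = ((r1 - r0) / dt, (c1 - c0) / dt)   # pixels per second
```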

  1. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN.

    PubMed

    Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E

    2013-10-21

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
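    The effective dose compared here is the tissue-weighted sum of organ dose equivalents, E = Σ_T w_T H_T. The sketch below uses the ICRP Publication 103 tissue weighting factors; the organ dose equivalents are invented placeholders, and the REID calculation is not shown.

```python
# Effective dose from organ dose equivalents using ICRP 103 tissue weights
# (which sum to 1 by construction). Organ H_T inputs are placeholders.
W_T = {
    "red-bone-marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone-surface": 0.01, "brain": 0.01, "salivary-glands": 0.01, "skin": 0.01,
}

def effective_dose(organ_H):
    """organ_H: dose equivalent per tissue (mSv); missing tissues count as 0."""
    return sum(w * organ_H.get(t, 0.0) for t, w in W_T.items())

# Sanity check: a uniform 1 mSv dose equivalent gives E = 1 mSv.
uniform = {t: 1.0 for t in W_T}
```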

  2. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir A.; Sato, Tatsuhiko; Slaba, Tony C.; Shavers, Mark R.; Semones, Edward J.; Van Baalen, Mary; Bolch, Wesley E.

    2013-10-01

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.

  3. Development of a 3D numerical code to calculate the trajectories of the blow off electrons emitted by a vacuum surface discharge: Application to the study of the electromagnetic interference induced on a spacecraft

    NASA Astrophysics Data System (ADS)

    Froger, Etienne

    1993-05-01

    A description of the electromagnetic behavior of a satellite subjected to an electric discharge is given using a specially developed numerical code. One of the particularities of vacuum discharges, obtained by irradiation of polymers, is the intense emission of electrons into the spacecraft environment. Electromagnetic radiation, associated with the trajectories of the particles around the spacecraft, is considered the main source of the interference observed. In the absence of accurate orbital data and realistic ground tests, the assessment of these effects requires numerical simulation of the interaction between this electron source and the spacecraft. This is done with the GEODE particle code, which is applied to characteristic configurations in order to estimate the spacecraft response to a discharge, simulated from a vacuum discharge model developed in the laboratory. The spacecraft response to a current injection is simulated by the three-dimensional numerical code ALICE. The comparison between discharge and injection effects, from the results given by the two codes, illustrates the representativeness of electromagnetic susceptibility tests and the main parameters for their definition.

  4. Cloud-based design of high average power traveling wave linacs

    NASA Astrophysics Data System (ADS)

    Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.

    2017-12-01

    The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.
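    The differential equations of motion underlying such a tool reduce, in the simplest longitudinal case, to coupled energy-gain and phase-slip equations for an electron riding a traveling wave. The sketch below uses arbitrary field and injection values, assumes a wave phase velocity of c, and omits the beam loading and space charge that Hellweg includes.

```python
# Longitudinal dynamics of an electron in a traveling-wave structure:
#   dgamma/dz = (e E0 / m c^2) cos(phi)
#   dphi/dz   = k (1/beta - 1)      (slip relative to a v_phase = c wave)
# Field amplitude, frequency, and injection energy are illustrative only.
import math

MC2 = 0.511          # electron rest energy, MeV
FREQ = 2.856e9       # S-band frequency, Hz
C = 2.998e8          # speed of light, m/s
E0 = 8.0             # accelerating gradient, MV/m

def track(gamma, phi, length=1.0, dz=1e-4):
    """Euler integration of the energy-gain and phase-slip equations."""
    k = 2.0 * math.pi * FREQ / C
    z = 0.0
    while z < length:
        beta = math.sqrt(1.0 - 1.0 / (gamma * gamma))
        gamma += (E0 / MC2) * math.cos(phi) * dz
        phi += k * (1.0 / beta - 1.0) * dz    # nonrelativistic slip is largest
        z += dz
    return gamma, phi

gamma_out, phi_out = track(gamma=5.0, phi=0.0)   # ~2 MeV electron on crest
```

The slip term is what makes the meter-scale nonrelativistic region hard: it shrinks like 1/(2 gamma^2), so capture depends sensitively on the injection energy and phase.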

  5. Comparison of measured and computed phase functions of individual tropospheric ice crystals

    NASA Astrophysics Data System (ADS)

    Stegmann, Patrick G.; Tropea, Cameron; Järvinen, Emma; Schnaiter, Martin

    2016-07-01

Airplanes passing through the incus (lat. anvil) regions of tropical cumulonimbus clouds are at risk of suffering an engine power-loss event and engine damage due to ice ingestion (Mason et al., 2006 [1]). Research in this field relies on optical measurement methods to characterize ice crystals; however, the design and implementation of such methods presently suffer from the lack of reliable and efficient means of predicting the light scattering from ice crystals. The nascent discipline of direct measurement of phase functions of ice crystals, in conjunction with particle imaging and forward modelling through geometrical-optics-derived and transition-matrix codes, for the first time allows us to obtain a deeper understanding of the optical properties of real tropospheric ice crystals. In this manuscript, a sample phase function obtained via the Particle Habit Imaging and Polar Scattering (PHIPS) probe during a measurement campaign in flight over Brazil is compared to three different light scattering codes. These include a newly developed first-order geometrical optics code taking into account the influence of the Gaussian beam illumination used in the PHIPS device, as well as the reference ray-tracing code of Macke and the T-matrix code of Kahnert.

  6. An electrostatic Particle-In-Cell code on multi-block structured meshes

    NASA Astrophysics Data System (ADS)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David

    2017-12-01

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
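The hybrid mover described in the abstract (logical-space position, physical-space velocity) can be illustrated with a small sketch. The block geometry, bilinear mapping, and time step below are all invented for illustration; CPIC's actual 3D mapping, field solve, and block-crossing logic are far more involved.

```python
import numpy as np

# Hypothetical 2D block: corners of a distorted quadrilateral in physical space,
# mapped bilinearly from the unit square (logical space). Particles advance in
# logical coordinates, so leaving the block reduces to leaving [0,1]^2.
corners = np.array([[0.0, 0.0],   # logical (0,0)
                    [2.0, 0.2],   # logical (1,0)
                    [2.3, 1.8],   # logical (1,1)
                    [0.1, 1.5]])  # logical (0,1)

def to_physical(xi, eta):
    """Bilinear map from logical (xi, eta) in [0,1]^2 to physical (x, y)."""
    c00, c10, c11, c01 = corners
    return ((1 - xi) * (1 - eta) * c00 + xi * (1 - eta) * c10
            + xi * eta * c11 + (1 - xi) * eta * c01)

def jacobian(xi, eta):
    """d(x,y)/d(xi,eta) of the bilinear map (columns are the two derivatives)."""
    c00, c10, c11, c01 = corners
    dxi = (1 - eta) * (c10 - c00) + eta * (c11 - c01)
    deta = (1 - xi) * (c01 - c00) + xi * (c11 - c10)
    return np.column_stack([dxi, deta])

def push_logical(xi_eta, v_phys, dt):
    """One hybrid-mover step: convert the physical-space velocity to logical
    space with the inverse Jacobian, then advance the logical position."""
    v_logical = np.linalg.solve(jacobian(*xi_eta), v_phys)
    return xi_eta + dt * v_logical
```

For small steps the resulting physical displacement matches v_phys * dt, while the bounds check that triggers a block transfer stays a trivial comparison against the unit square.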

  7. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE PAGES

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; ...

    2017-09-14

We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  8. Revised Extended Grid Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, Roger L.

The Revised Eolus Grid Library (REGL) is a mesh-tracking library that was developed for use with the MCNP6™ computer code so that (radiation) particles can track on an unstructured mesh. The unstructured mesh is a finite element representation of any geometric solid model created with a state-of-the-art CAE/CAD tool. The mesh-tracking library is written using modern Fortran and programming standards; the library is Fortran 2003 compliant. The library was created with a defined application programmer interface (API) so that it could easily integrate with other particle tracking/transport codes. The library does not handle parallel processing via the message passing interface (mpi), but has been used successfully where the host code handles the mpi calls. The library is thread-safe and supports the OpenMP paradigm. As a library, all features are available through the API; overall, a tight coupling between it and the host code is required. Features of the library are summarized in the following list: can accommodate first- and second-order 4-, 5-, and 6-sided polyhedra; any combination of element types may appear in a single geometry model; parts may not contain tetrahedra mixed with other element types; pentahedra and hexahedra can be together in the same part; robust handling of overlaps and gaps; tracks element-to-element to produce path length results at the element level; finds element numbers for a given mesh location; finds intersection points on element faces for the particle tracks; produces a data file for post-processing results analysis; reads Abaqus .inp input (ASCII) files to obtain information for the global mesh model; supports parallel input processing via mpi; and supports parallel particle transport by both mpi and OpenMP.

  9. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    NASA Astrophysics Data System (ADS)

Cowan, B. M.; Kalmykov, S. Y.; Beck, A.; Davoine, X.; Bunkers, K.; Lifschitz, A. F.; Lefebvre, E.; Bruhwiler, D. L.; Shadwick, B. A.; Umstadter, D. P.

    2012-08-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100-terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, 3D particle-in-cell modelling are examined. First, the Cartesian code vorpal (Nieter, C. and Cary, J. R. 2004 VORPAL: a versatile plasma simulation code. J. Comput. Phys. 196, 538) using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code calder-circ (Lifschitz, A. F. et al. 2009 Particle-in-cell modelling of laser-plasma interaction using Fourier decomposition. J. Comput. Phys. 228(5), 1803-1814) uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two modes, reducing the computational load to roughly that of a planar Cartesian simulation while preserving the 3D nature of the interaction. This significant economy of resources allows using fine resolution in the direction of propagation and a small time step, making numerical dispersion vanishingly small, together with a large number of particles per cell, enabling good particle statistics. Quantitative agreement of two simulations indicates that these are free of numerical artefacts. Both approaches thus retrieve the physically correct evolution of the plasma bubble, recovering the intrinsic connection of electron self-injection to the nonlinear optical evolution of the driver.
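The poloidal-mode decomposition that makes quasi-cylindrical codes like calder-circ so economical can be demonstrated in a few lines. The field values below are synthetic, constructed so that only the m = 0 (wake-like) and m = 1 (laser-like) azimuthal modes are present, which is the situation the abstract describes for a linearly polarized driver.

```python
import numpy as np

# Sample a field on an azimuthal grid and expand it in e^{i m theta} modes.
# Keeping only m = 0 and m = 1 reconstructs it exactly for this symmetry.
ntheta = 64
theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
field = 1.5 + 0.7 * np.cos(theta)           # m = 0 plus m = 1 content only

modes = np.fft.rfft(field) / ntheta          # complex amplitude per mode m
kept = np.zeros_like(modes)
kept[:2] = modes[:2]                         # retain m = 0 and m = 1 only

reconstructed = np.fft.irfft(kept * ntheta, n=ntheta)
```

Truncating the expansion at two modes is what reduces the cost to roughly that of a planar 2D simulation while the macroparticles still move in full 3D.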

  10. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca

We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  11. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics and non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers needed to predict accurately the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
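The Herschel-Bulkley-Papanastasiou constitutive model named above admits a compact sketch: a power-law viscous stress plus a yield stress, with the Papanastasiou exponential regularizing the yield term so the effective viscosity stays finite as the shear rate vanishes. The parameter values below are illustrative only, not those calibrated in the paper.

```python
import numpy as np

# Effective viscosity of the HBP model, tau = k*gamma^n + tau_y*(1 - exp(-m*gamma)),
# written as mu_eff(gamma) = tau/gamma. k: consistency, n: power-law index,
# tau_y: yield stress, m: Papanastasiou regularization parameter (all illustrative).
def hbp_viscosity(gamma_dot, k=0.5, n=0.8, tau_y=10.0, m=100.0):
    """Effective viscosity mu_eff for shear rate gamma_dot (> 0)."""
    gamma_dot = np.asarray(gamma_dot, dtype=float)
    return k * gamma_dot ** (n - 1.0) + tau_y * (1.0 - np.exp(-m * gamma_dot)) / gamma_dot
```

Below the yield stress the regularized term dominates and the material behaves as an extremely viscous fluid; at high shear rates the power-law (shear-thinning, n < 1) branch takes over, which is the behavior the sediment layers in the abstract require.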

  12. Differential Cross Section Kinematics for 3-dimensional Transport Codes

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Dick, Frank

    2008-01-01

In support of the development of 3-dimensional transport codes, this paper derives the relevant relativistic particle kinematic theory. Formulas are given for invariant, spectral and angular distributions in both the lab (spacecraft) and center of momentum frames, for collisions involving 2-, 3- and n-body final states.
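The lab-to-center-of-momentum bookkeeping that underlies such distribution formulas can be sketched directly. The masses and beam momentum below are illustrative numbers in natural units (c = 1), not values from the paper.

```python
import numpy as np

def four_momentum(m, p):
    """Four-momentum (E, px, py, pz) of a mass-m particle moving along z."""
    return np.array([np.sqrt(m * m + p * p), 0.0, 0.0, p])

def boost_z(p4, beta):
    """Lorentz boost of a four-vector along z with velocity beta."""
    gamma = 1.0 / np.sqrt(1.0 - beta * beta)
    E, px, py, pz = p4
    return np.array([gamma * (E - beta * pz), px, py, gamma * (pz - beta * E)])

def invariant_s(p4a, p4b):
    """Mandelstam s = (p_a + p_b)^2 with metric signature (+,-,-,-)."""
    tot = p4a + p4b
    return tot[0] ** 2 - tot[1] ** 2 - tot[2] ** 2 - tot[3] ** 2

# Illustrative case: a ~0.938 GeV proton with 3 GeV/c momentum on a proton at rest.
proj, targ = four_momentum(0.938, 3.0), four_momentum(0.938, 0.0)
beta_cm = (proj + targ)[3] / (proj + targ)[0]     # CM velocity of the pair
proj_cm, targ_cm = boost_z(proj, beta_cm), boost_z(targ, beta_cm)
```

The invariant s is the same in both frames, and the total longitudinal momentum vanishes in the CM frame, which is the consistency check one applies before transforming spectral or angular distributions between frames.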

  13. WE-H-BRA-09: Application of a Modified Microdosimetric-Kinetic Model to Analyze Relative Biological Effectiveness of Ions Relevant to Light Ion Therapy Using the Particle Heavy Ion Transport System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butkus, M; Palmer, T

Purpose: To evaluate the dose and biological effectiveness of various ions that could potentially be used for actively scanned particle therapy. Methods: The PHITS Monte Carlo code paired with a microscopic analytical function was used to determine probability distribution functions of the lineal energy in 0.3µm diameter spheres throughout a water phantom. Twenty million primary particles for 1H beams and ten million particles for 4He, 7Li, 10B, 12C, 14N, 16O, and 20Ne were simulated for 0.6cm diameter pencil beams. Beam energies corresponding to Bragg peak depths of 50, 100, 150, 200, 250, and 300mm were used and evaluated transversely every millimeter and radially in annuli with outer radii of 1.0, 2.0, 3.0, 3.2, 3.4, 3.6, 4.0, 5.0, 10.0, 15.0, 20.0 and 25.0mm. The acquired probability distributions were reduced to dose-mean lineal energies and applied to the modified microdosimetric kinetic model for five different cell types to calculate relative biological effectiveness (RBE) compared to 60Co beams at the 10% survival threshold. The product of the calculated RBEs and the simulated physical dose was taken to create the biological dose, and comparisons were then made between the various ions. Results: Transversely, the 10B beam was seen to minimize relative biological dose in both the constant and accelerated dose change regions, proximal to the Bragg peak, for all beams traveling greater than 50mm. For the 50mm beam, 7Li was seen to provide the most optimal biological dose profile. Radially, small fluctuations (<4.2%) were seen in RBE while the physical dose was greater than 1% for all beams. Conclusion: Even with the growing usage of 12C, it may not be the most optimal ion in all clinical situations. Boron was calculated to have slightly enhanced RBE characteristics, leading to lower relative biological doses.
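The final step of the workflow above, converting cell-survival parameters into an RBE at 10% survival, can be sketched with the linear-quadratic model. In the modified microdosimetric-kinetic model the ion's alpha is raised through the dose-mean lineal energy; here the alpha/beta values are purely illustrative placeholders, not fitted data from the study.

```python
import math

def dose_at_survival(alpha, beta, survival=0.1):
    """Solve exp(-alpha*D - beta*D^2) = survival for the positive dose D (Gy)."""
    ln_inv_s = -math.log(survival)               # alpha*D + beta*D^2 = ln(1/S)
    return (-alpha + math.sqrt(alpha * alpha + 4.0 * beta * ln_inv_s)) / (2.0 * beta)

def rbe10(alpha_ion, beta_ion, alpha_ref=0.15, beta_ref=0.05):
    """RBE at 10% survival: reference (e.g. 60Co) dose over ion dose.
    Default reference alpha/beta are illustrative, not measured values."""
    return dose_at_survival(alpha_ref, beta_ref) / dose_at_survival(alpha_ion, beta_ion)
```

An ion with a larger alpha (denser local energy deposition) needs less dose to reach 10% survival, so its RBE exceeds one; multiplying this RBE map by the simulated physical dose gives the biological dose compared in the abstract.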

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, H; Yoon, D; Jung, J

Purpose: The purpose of this study is to suggest a tumor monitoring technique using prompt gamma rays emitted during the reaction between an antiproton and a boron particle, and to verify the increased therapeutic effectiveness of antiproton boron fusion therapy using a Monte Carlo simulation code. Methods: We acquired the percentage depth dose of the antiproton beam in a water phantom with and without three boron uptake regions (regions A, B, and C) using the F6 tally of MCNPX. The tomographic image was reconstructed from 32 projections using prompt gamma ray events from the reaction between the antiproton and boron during the treatment (reconstruction algorithm: MLEM). The image was reconstructed on an 80 × 80 pixel matrix with a pixel size of 5 mm, and a 10% energy window was applied. Results: The prompt gamma ray peak for imaging was observed at 719 keV in the energy spectrum using the F8 tally function (energy deposition tally) of the MCNPX code. The tomographic image shows that the boron uptake regions were successfully identified from the simulation results. In terms of receiver operating characteristic curve analysis, the area under the curve values were 0.647 (region A), 0.679 (region B), and 0.632 (region C). The SNR values increased as the tumor diameter increased. The CNR indicates the relative signal intensity within different regions; the CNR values also increased as the difference in boron uptake region diameter increased. Conclusion: We confirmed the feasibility of tumor monitoring during antiproton therapy as well as the superior therapeutic effect of antiproton boron fusion therapy. This result can be beneficial for the development of a more accurate particle therapy.
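The MLEM reconstruction named in the methods has a compact multiplicative update that can be shown on a toy problem. The 2 × 2 system matrix below is invented for illustration and is far smaller than the 32-projection, 80 × 80 pixel problem in the abstract, but the update rule is the standard one.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM update x <- x * A^T(y / Ax) / A^T 1, starting from a flat image."""
    x = np.ones(A.shape[1])                 # non-negative initial guess
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x))) / sens
    return x

# Toy system: two pixels seen by two overlapping projections (invented numbers).
A = np.array([[1.0, 0.2],
              [0.3, 1.0]])
x_true = np.array([4.0, 2.0])
y = A @ x_true                              # noise-free projection data
x_rec = mlem(A, y)
```

With noise-free data the iteration converges to the true activity; with Poisson-noisy prompt-gamma counts it converges to the maximum-likelihood image instead, which is why MLEM is the usual choice for count-starved emission data.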

  15. A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport

    NASA Astrophysics Data System (ADS)

    Tautz, R. C.

    2016-05-01

    A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
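The SIMD restructuring described above, advancing all test particles in lock-step rather than one trajectory at a time, can be mimicked on the CPU with array operations. The sketch below uses a numpy-vectorized Boris rotation in a uniform magnetostatic field as a stand-in for the CUDA kernel (one array lane playing the role of one GPU thread); the field, step size, and particle count are arbitrary, and the code does not reproduce the article's turbulent-field generation.

```python
import numpy as np

def boris_push(v, b, dt):
    """Boris velocity rotation for all particles at once: v has shape (N, 3),
    b is the (uniform, dimensionless) magnetic field. Exactly norm-preserving."""
    t = 0.5 * dt * b                          # half-rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)

rng = np.random.default_rng(1)
v = rng.normal(size=(1024, 3))                # isotropic initial test-particle velocities
b = np.array([0.0, 0.0, 1.0])
speeds0 = np.linalg.norm(v, axis=1)
for _ in range(100):                          # lock-step advance, SIMD-style
    v = boris_push(v, b, 0.05)
```

Because the rotation is a pure rotation, particle speeds are conserved to machine precision, a cheap correctness check that carries over unchanged to a GPU implementation.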

  16. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brantley, Patrick; Dawson, Shawn; McKinley, Scott

    2016-04-20

The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate the convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Beyond this application space, and with more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
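The convergence question raised above rests on the standard Monte Carlo result that the statistical error of a tally falls as 1/sqrt(N) with particle count, so a 100-fold increase in particles buys roughly a 10-fold error reduction. The toy "tally" below (exponential path lengths) is purely synthetic and unrelated to Mercury's physics; it only demonstrates the scaling.

```python
import numpy as np

rng = np.random.default_rng(7)

def tally_std_error(n_particles, n_trials=200):
    """Spread over independent trials of an n-particle mean tally: an empirical
    estimate of the standard error of the Monte Carlo estimate."""
    samples = rng.exponential(1.0, size=(n_trials, n_particles))
    return samples.mean(axis=1).std()

err_small = tally_std_error(100)
err_large = tally_std_error(10000)   # 100x the particles -> roughly 10x smaller error
```

In practice one runs the same calculation at several particle counts and checks that the tally spread follows this scaling; departures from it flag under-sampled regions of the urban geometry.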

  17. Particle Count Limits Recommendation for Aviation Fuel

    DTIC Science & Technology

    2015-10-05

Particle Counter Methodology • Particle counts are taken utilizing calibration methodologies and standardized cleanliness code ratings (ISO 11171, ISO …) • [Flattened limits table; recoverable entries, by column (Receipt / Vehicle Fuel Tank / Fuel Injector / Aviation Fuel): DEF (AUST) 5695B 18/16/13; Parker 18/16/13 and 14/10/7; Pamas / Parker / Particle Solutions 19/17/…12; U.S. DOD 19/17/14/13*; Diesel Fuel World Wide Fuel Charter 5th 18/16/13; Caterpillar 18/16/13; Detroit Diesel 18/16/13; MTU]
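The cleanliness codes of the 18/16/13 form quoted above are built on an ISO 4406-style doubling scale: each scale number roughly doubles the allowed particle count per millilitre, so a count maps to a code through a base-2 logarithm. The 0.01 counts/mL anchor used below matches the published table only approximately; consult ISO 4406 itself for the exact range boundaries.

```python
import math

def scale_number(count_per_ml):
    """Approximate ISO 4406 scale number for a particle count per mL
    (doubling scale anchored near 0.01 counts/mL for scale number 1)."""
    return max(1, math.ceil(math.log2(count_per_ml / 0.01)))

def cleanliness_code(counts):
    """Code string like '18/16/13' from counts at the three sizes
    (conventionally >4 um, >6 um, >14 um per mL)."""
    return "/".join(str(scale_number(c)) for c in counts)
```

For example, counts of 2500, 640, and 80 particles/mL at the three sizes land on the 18/16/13 rating that appears repeatedly in the table above.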

  18. Exhaust plume impingement of chemically reacting gas-particle flows

    NASA Technical Reports Server (NTRS)

    Smith, S. D.; Penny, M. M.; Greenwood, T. F.; Roberts, B. B.

    1975-01-01

    A series of computer codes has been developed to predict gas-particle flows and resulting impingement forces, moments and heating rates to surfaces immersed in the flow. The gas-particle flow solution is coupled via heat transfer and drag between the phases with chemical effects included in the gas phase. The flow solution and impingement calculations are discussed. Analytical results are compared with test data obtained to evaluate gas-particle effects on the Space Shuttle thermal protection system during the staging maneuver.
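The drag and heat-transfer coupling between phases described above can be caricatured with a single-particle relaxation step. The response times and gas state below are arbitrary; a production plume code solves compressible, chemically reacting gas dynamics with two-way source terms rather than relaxing toward a frozen gas state.

```python
import numpy as np

def particle_step(v_p, T_p, v_g, T_g, dt, tau_v=1e-3, tau_T=5e-3):
    """Relax particle velocity and temperature toward the local gas state with
    momentum and thermal response times tau_v and tau_T (exact exponential
    update of the linear drag/heating ODEs). All values illustrative."""
    f = np.exp(-dt / tau_v)
    g = np.exp(-dt / tau_T)
    return v_g + (v_p - v_g) * f, T_g + (T_p - T_g) * g
```

Because tau_v and tau_T generally differ, a particle can be velocity-equilibrated with the gas long before it is thermally equilibrated, which is one reason the gas and particle phases must be solved as a coupled system for impingement heating predictions.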

  19. Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Alejandro

    2017-12-01

    Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
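The interpolation step at the heart of such a visualization can be sketched in a few lines: each particle's mass is smeared onto a pixel grid with a smoothing kernel. A Gaussian kernel is used here as a simple stand-in for the cubic spline commonly used in SPH, and the particle distribution is random synthetic data, not a simulation snapshot.

```python
import numpy as np

def sph_image(pos, mass, h, npix=64, extent=1.0):
    """Surface-density image: sum_j m_j W(|r - r_j|, h_j) on an npix x npix grid,
    with a 2D Gaussian kernel of width h_j per particle."""
    xs = np.linspace(0.0, extent, npix)
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((npix, npix))
    for (px, py), m, hj in zip(pos, mass, h):
        r2 = (X - px) ** 2 + (Y - py) ** 2
        img += m * np.exp(-0.5 * r2 / hj ** 2) / (2.0 * np.pi * hj ** 2)
    return img

rng = np.random.default_rng(0)
pos = rng.uniform(0.2, 0.8, size=(100, 2))   # synthetic particle positions
mass = np.ones(100)
h = np.full(100, 0.05)                       # uniform smoothing lengths
img = sph_image(pos, mass, h)
```

Because the kernel is normalized, the pixel sum times the pixel area recovers the total particle mass, which is the self-consistency property that makes such images quantitatively useful rather than merely pretty.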

  20. PHITS simulations of the Matroshka experiment

    NASA Astrophysics Data System (ADS)

    Gustafsson, Katarina; Sihver, Lembit; Mancusi, Davide; Sato, Tatsuhiko

In order to design safer space exploration, radiation exposure estimations are necessary; the radiation environment in space is very different from the one on Earth and is harmful for humans and for electronic equipment. The threat originates from two sources: Galactic Cosmic Rays and Solar Particle Events. It is important to understand what happens when these particles strike matter such as space vehicle walls, human organs and electronics. We are therefore developing a tool able to estimate the radiation exposure of both humans and electronics. The tool will be based on PHITS, the Particle and Heavy-Ion Transport code System, a three-dimensional Monte Carlo code which can calculate interactions and transport of particles and heavy ions in matter. PHITS is developed by a collaboration between RIST (Research Organization for Information Science & Technology), JAEA (Japan Atomic Energy Agency) and KEK (High Energy Accelerator Research Organization) in Japan, and Chalmers University of Technology, Sweden. A method for benchmarking and developing the code is to simulate experiments performed in space or on Earth. We have carried out simulations of the Matroshka experiment, which focuses on determining the radiation load on astronauts inside and outside the International Space Station by using the torso of a tissue-equivalent human phantom, filled with active and passive detectors located in the positions of critical tissues and organs. We will present the status and results of our simulations.

  1. Computational fluid dynamics assessment: Volume 1, Computer simulations of the METC (Morgantown Energy Technology Center) entrained-flow gasifier: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celik, I.; Chattree, M.

    1988-07-01

An assessment of the theoretical and numerical aspects of the computer code, PCGC-2, is made; and the results of the application of this code to the Morgantown Energy Technology Center (METC) advanced gasification facility entrained-flow reactor, ''the gasifier,'' are presented. PCGC-2 is a code suitable for simulating pulverized coal combustion or gasification under axisymmetric (two-dimensional) flow conditions. The governing equations for the gas and particulate phase have been reviewed. The numerical procedure and the related programming difficulties have been elucidated. A single-particle model similar to the one used in PCGC-2 has been developed, programmed, and applied to some simple situations in order to gain insight into the physics of coal particle heat-up, devolatilization, and char oxidation processes. PCGC-2 was applied to the METC entrained-flow gasifier to study numerically the flash pyrolysis of coal, and gasification of coal with steam or carbon dioxide. The results from the simulations are compared with measurements. The gas and particle residence times, particle temperature, and mass component history were also calculated and the results were analyzed. The results provide useful information for understanding the fundamentals of coal gasification and for assessment of experimental results performed using the reactor considered. 69 refs., 35 figs., 23 tabs.

  2. Collisionless stellar hydrodynamics as an efficient alternative to N-body methods

    NASA Astrophysics Data System (ADS)

    Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard

    2013-01-01

The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However, when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smoothed Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead, which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach, which we term 'collisionless stellar hydrodynamics', enables us to do away with the particle-mesh approach and, since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high-order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by swing amplification theory.
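The characteristic analysis mentioned above starts from the eigenstructure of the flux Jacobian of the moment system. As a simple stand-in for the paper's fuller Boltzmann moment hierarchy, the toy below uses a 1D two-moment system with an isothermal-type closure, whose Jacobian has the familiar eigenvalues u - c and u + c; this illustrates the machinery, not the paper's actual closure.

```python
import numpy as np

def flux_jacobian(rho, u, c):
    """dF/dU for conserved variables U = (rho, rho*u) and flux
    F = (rho*u, rho*u**2 + c**2 * rho), an isothermal-type two-moment closure."""
    return np.array([[0.0, 1.0],
                     [c * c - u * u, 2.0 * u]])

rho, u, c = 1.0, 0.3, 1.2          # illustrative state
eigvals = np.sort(np.linalg.eigvals(flux_jacobian(rho, u, c)))
```

The eigenvalues give the characteristic speeds along which information is traced in a high-order characteristic tracing scheme, and the corresponding eigenvectors define the decomposition fed to the Riemann solver.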

  3. Treating electron transport in MCNP™

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H.G.

    1996-12-31

The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum slowing down from 0.5 MeV to 0.0625 MeV will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport unfeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the ''condensed history'' Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
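The condensed-history loop described above has a simple schematic shape: each step lumps many collisions into one sampled energy loss and one sampled net deflection. In the sketch below, Gaussian straggling and a Gaussian small-angle kick are crude stand-ins for the Landau/Blunck-Leisegang and Goudsmit-Saunderson distributions MCNP actually samples, and the 8%-per-step rule and fake stopping power only loosely mimic an ETRAN-style energy grid.

```python
import random

def condensed_history(energy_mev, cutoff=0.0625, seed=1):
    """Schematic condensed-history walk: step until the energy cutoff,
    accumulating path length and a net deflection angle (radians)."""
    rng = random.Random(seed)
    theta, path = 0.0, 0.0
    while energy_mev > cutoff:
        mean_loss = 0.08 * energy_mev                # ~8% mean energy loss per step
        loss = max(0.0, rng.gauss(mean_loss, 0.2 * mean_loss))  # straggling stand-in
        path += mean_loss / 1.5                      # invented stopping power (MeV/cm)
        theta += rng.gauss(0.0, 0.05)                # net multiple-scattering kick
        energy_mev -= loss
    return energy_mev, theta, path

final_e, final_theta, total_path = condensed_history(0.5)
```

The payoff is the one stated in the abstract: the 10^5 individual interactions of a real electron history are replaced by a few dozen sampled steps.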

  4. Experimental study and discrete element method simulation of Geldart Group A particles in a small-scale fluidized bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Rabha, Swapna; Verma, Vikrant

Geldart Group A particles are of great importance in various chemical processes because of advantages such as ease of fluidization, large surface area, and many other unique properties. It is very challenging to model the fluidization behavior of such particles, as widely reported in the literature. In this study, a pseudo-2D experimental column with a width of 5 cm, a height of 45 cm, and a depth of 0.32 cm was developed for detailed measurements of fluidized bed hydrodynamics of fine particles to facilitate the validation of computational fluid dynamic (CFD) modeling. The hydrodynamics of sieved FCC particles (Sauter mean diameter of 148 µm and density of 1300 kg/m3) and NETL-32D sorbents (Sauter mean diameter of 100 µm and density of 480 kg/m3) were investigated mainly through visualization by a high-speed camera. Numerical simulations were then conducted by using NETL's open source code MFIX-DEM. Both qualitative and quantitative information including bed expansion, bubble characteristics, and solid movement were compared between the numerical simulations and the experimental measurements. Furthermore, the cohesive van der Waals force was incorporated in the MFIX-DEM simulations and its influence on the flow hydrodynamics was studied.
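The cohesive van der Waals force incorporated into the DEM simulations is commonly written in the unretarded Hamaker sphere-sphere form, cut off at a minimum separation so the force stays finite at contact. The Hamaker constant and cutoff below are typical orders of magnitude for illustration, not the values used in the study.

```python
def van_der_waals_force(r1, r2, h, hamaker=1e-19, h_min=1e-9):
    """Attractive van der Waals force (N) between spheres of radii r1, r2 (m)
    at surface separation h (m): F = A * R_eff / (6 h^2), with
    R_eff = r1*r2/(r1+r2) and h clamped at h_min near contact."""
    r_eff = r1 * r2 / (r1 + r2)
    h = max(h, h_min)                  # regularize the 1/h^2 divergence at contact
    return hamaker * r_eff / (6.0 * h * h)
```

For fine Group A particles this force at near-contact separations rivals or exceeds particle weight, which is why adding it changes the simulated bed expansion and bubbling behavior.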

  5. Experimental study and discrete element method simulation of Geldart Group A particles in a small-scale fluidized bed

    DOE PAGES

    Li, Tingwen; Rabha, Swapna; Verma, Vikrant; ...

    2017-09-19

    Geldart Group A particles are of great importance in various chemical processes because of advantages such as ease of fluidization, large surface area, and many other unique properties. It is very challenging to model the fluidization behavior of such particles, as widely reported in the literature. In this study, a pseudo-2D experimental column with a width of 5 cm, a height of 45 cm, and a depth of 0.32 cm was developed for detailed measurements of fluidized bed hydrodynamics of fine particles to facilitate the validation of computational fluid dynamic (CFD) modeling. The hydrodynamics of sieved FCC particles (Sauter mean diameter of 148 µm and density of 1300 kg/m³) and NETL-32D sorbents (Sauter mean diameter of 100 µm and density of 480 kg/m³) were investigated mainly through visualization by a high-speed camera. Numerical simulations were then conducted using NETL's open source code MFIX-DEM. Both qualitative and quantitative information, including bed expansion, bubble characteristics, and solid movement, was compared between the numerical simulations and the experimental measurements. Furthermore, the cohesive van der Waals force was incorporated in the MFIX-DEM simulations and its influence on the flow hydrodynamics was studied.

  6. Experimental and Monte Carlo studies of fluence corrections for graphite calorimetry in low- and high-energy clinical proton beams.

    PubMed

    Lourenço, Ana; Thomas, Russell; Bouchard, Hugo; Kacperek, Andrzej; Vondracek, Vladimir; Royle, Gary; Palmans, Hugo

    2016-07-01

    The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the fluka code [A. Ferrari et al., "fluka: A multi-particle transport code," in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., "The fluka Code: Developments and challenges for high energy and medical applications," Nucl. Data Sheets 120, 211-214 (2014)], to partial fluence corrections measured experimentally. A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary particle fluence. A correction factor, F(d), has been established to relate fluence corrections defined theoretically to partial fluence corrections derived experimentally. The findings presented here are also relevant to water and tissue-equivalent-plastic materials given their carbon content.

  7. Quantum Engineering of Dynamical Gauge Fields on Optical Lattices

    DTIC Science & Technology

    2016-07-08

    exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained...a fact that can be explained by an approximate particle-hole symmetry. We have also developed a computer code suite for simulating the Abelian

  8. Polydisperse particle-driven gravity currents in non-rectangular cross section channels

    NASA Astrophysics Data System (ADS)

    Zemach, T.

    2018-01-01

    We consider a high-Reynolds-number gravity current generated by a polydisperse suspension of n types of particles distributed in a fluid of density ρi. Each class of particles in suspension has a different settling velocity. The current propagates along a channel of non-rectangular cross section into an ambient fluid of constant density ρa. The bottom and top of the channel are at z = 0, H, and the cross section is given by the quite general form -f1(z) ≤ y ≤ f2(z) for 0 ≤ z ≤ H. The flow is modeled by the one-layer shallow-water equations obtained for the time-dependent motion. We solve the problem with a finite-difference numerical code to present typical profiles of height h, velocity u, and particle mass fractions (concentrations) (ϕ(j), j = 1, …, n). The runout length of suspensions in channels of power-law cross sections is analytically predicted using a simplified depth-averaged "box" model. We demonstrate that any degree of polydispersivity adds to the runout length of the currents, relative to that of equivalent monodisperse currents with an average settling velocity. The theoretical predictions are supported by the available experimental data. The present approach is a significant generalization of the particle-driven gravity current problem: on the one hand, the monodisperse current in non-rectangular channels is now a particular case of n = 1; on the other hand, the classical formulation of polydisperse currents for a rectangular channel is now just the particular case f(z) = const. in the wide domain of cross sections covered by this new model.
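
    The "box" model argument can be sketched numerically. The following is a simplified rectangular-channel version (the paper treats general power-law cross sections); the Froude number, initial volume, and settling velocities are illustrative dimensionless values:

```python
import math

def box_model_runout(phi0, w_s, g0=1.0, volume=1.0, fr=1.19, dt=1.0e-3):
    """Depth-averaged box model for a particle-driven gravity current.

    The current is a box of fixed volume (h = volume / l), its front moves
    at the Froude-number speed u = Fr*sqrt(g'*h), and each particle class j
    settles out as d(phi_j)/dt = -w_j*phi_j/h.  The reduced gravity g' is
    proportional to the remaining total particle load.  Returns the runout
    length at which the load is essentially exhausted.
    """
    phi, total0, length = list(phi0), sum(phi0), 0.1
    while sum(phi) > 1.0e-4 * total0:
        h = volume / length
        g_prime = g0 * sum(phi) / total0
        length += fr * math.sqrt(g_prime * h) * dt
        phi = [p * math.exp(-w * dt / h) for p, w in zip(phi, w_s)]
    return length

mono = box_model_runout([0.02], [1.0])             # monodisperse, mean w
poly = box_model_runout([0.01, 0.01], [0.5, 1.5])  # same mean settling velocity
# poly > mono: polydispersity extends the runout, as the abstract states
# (the slow-settling class keeps some buoyancy alive longer).
```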

  9. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particles morphology in compacted material.

    NASA Astrophysics Data System (ADS)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences to better understand the physicochemical processes involved and their long-term effect on material behaviour. Numerical models simplify the processes but try to preserve their main characteristics. The modelling results therefore depend strongly on the quality of the data describing the initial physicochemical conditions of the rock materials, fluids, and gases, and on how realistically the processes are represented. Current geochemical models do not properly account for rock porosity and permeability or for the particle morphology of clay minerals. Compacted materials like those considered as barriers in waste repositories will use low-permeability rocks such as mudstones or compacted powders: these contain mainly fine particles, and the geochemical models used to predict their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve the treatment of particle morphology in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated into these codes that treats the reaction surface area as a volume-dependent parameter and couples the calculated evolution of the mass balance in the system with the evolution of reactive surface areas. We carried out application exercises for numerical validation of these new versions of the codes and compared the results with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking the evolution of reactive surface area into account during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to resolve competition effects between the different phases present in the system, all of which are able to fix the chemical elements mobilised by the water-mineral interaction processes. To validate our model we simulated the compacted bentonite (MX80) studied for engineered barriers for radioactive waste confinement, composed mainly of Na-Ca-montmorillonite. The study of particle morphology and reactive surface evolution reveals that aqueous ions have a complex behaviour, especially when competition between various mineral phases occurs. In that case, our model predicts preferential precipitation of the finest particles, favouring smectites over zeolites. This work is part of a PhD thesis supported by Andra, the French Radioactive Waste Management Agency.

  10. KEWPIE: A dynamical cascade code for decaying excited compound nuclei

    NASA Astrophysics Data System (ADS)

    Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David

    2004-05-01

    A new dynamical cascade code for decaying hot nuclei is proposed, specially adapted to the synthesis of super-heavy nuclei. For such systems the channel of interest is the tiny fraction of nuclei that decay through particle emission, so the code avoids classical Monte Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows evaluation of both statistical and dynamical observables. Results are successfully compared to experimental data.
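
    The non-Monte-Carlo idea can be illustrated for a single decay step: evolve the occupation probability deterministically under competing channels, one of which (fission) has a time-dependent rate. All rates below are hypothetical illustrative values, not KEWPIE's actual scheme:

```python
import math

def particle_emission_probability(t_max, dt, gamma_n, gamma_f_of_t):
    """Deterministically integrate one compound-nucleus decay step.

    The population n(t) decays through neutron emission (constant rate
    gamma_n) in competition with fission, whose rate gamma_f(t) is time
    dependent, e.g. because the fission flux needs time to build up.
    Returns the cumulative probability of neutron emission -- the tiny
    survival channel relevant for super-heavy synthesis -- with no
    Monte Carlo sampling at all.
    """
    n, p_neutron, t = 1.0, 0.0, 0.0
    while t < t_max and n > 1.0e-12:
        total_rate = gamma_n + gamma_f_of_t(t)
        p_neutron += gamma_n * n * dt            # flux into the neutron channel
        n *= math.exp(-total_rate * dt)          # remaining compound population
        t += dt
    return p_neutron

# A fission rate rising slowly to its stationary value suppresses fission
# at early times, so more nuclei escape by neutron emission:
transient = lambda t: 1.0 * (1.0 - math.exp(-t / 5.0))
p_transient = particle_emission_probability(50.0, 0.01, 0.02, transient)
p_constant = particle_emission_probability(50.0, 0.01, 0.02, lambda t: 1.0)
```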

  11. Direct collapse to supermassive black hole seeds: comparing the AMR and SPH approaches.

    PubMed

    Luo, Yang; Nagamine, Kentaro; Shlosman, Isaac

    2016-07-01

    We provide a detailed comparison between the adaptive mesh refinement (AMR) code enzo-2.4 and the smoothed particle hydrodynamics (SPH)/N-body code gadget-3 in the context of isolated or cosmological direct baryonic collapse within dark matter (DM) haloes to form supermassive black holes. Gas flow is examined by following the evolution of basic parameters of the accretion flows. Both codes show an overall agreement in the general features of the collapse; however, many subtle differences exist. For isolated models, the codes increase their spatial and mass resolutions at different paces, which leads to substantially earlier collapse in SPH than in AMR cases due to the higher gravitational resolution in gadget-3. In cosmological runs, the AMR develops a slightly higher baryonic resolution than SPH during halo growth via cold accretion permeated by mergers. Still, both codes agree in the build-up of DM and baryonic structures. However, with the onset of collapse, this difference in mass and spatial resolution is amplified, so the evolution of SPH models begins to lag behind. Such a delay can affect the formation/destruction rate of H2 due to the UV background, and the basic properties of host haloes. Finally, isolated non-cosmological models in spinning haloes, with spin parameter λ ∼ 0.01-0.07, show delayed collapse for greater λ, but the pace of this increase is faster for AMR. Within our simulation set-up, gadget-3 requires significantly larger computational resources than enzo-2.4 during collapse, and similar resources during the pre-collapse, cosmological structure formation phase. Yet it benefits from substantially higher gravitational force and hydrodynamic resolutions, except at the end of collapse.

  12. Direct collapse to supermassive black hole seeds: comparing the AMR and SPH approaches

    NASA Astrophysics Data System (ADS)

    Luo, Yang; Nagamine, Kentaro; Shlosman, Isaac

    2016-07-01

    We provide a detailed comparison between the adaptive mesh refinement (AMR) code ENZO-2.4 and the smoothed particle hydrodynamics (SPH)/N-body code GADGET-3 in the context of isolated or cosmological direct baryonic collapse within dark matter (DM) haloes to form supermassive black holes. Gas flow is examined by following the evolution of basic parameters of the accretion flows. Both codes show an overall agreement in the general features of the collapse; however, many subtle differences exist. For isolated models, the codes increase their spatial and mass resolutions at different paces, which leads to substantially earlier collapse in SPH than in AMR cases due to the higher gravitational resolution in GADGET-3. In cosmological runs, the AMR develops a slightly higher baryonic resolution than SPH during halo growth via cold accretion permeated by mergers. Still, both codes agree in the build-up of DM and baryonic structures. However, with the onset of collapse, this difference in mass and spatial resolution is amplified, so the evolution of SPH models begins to lag behind. Such a delay can affect the formation/destruction rate of H2 due to the UV background, and the basic properties of host haloes. Finally, isolated non-cosmological models in spinning haloes, with spin parameter λ ˜ 0.01-0.07, show delayed collapse for greater λ, but the pace of this increase is faster for AMR. Within our simulation set-up, GADGET-3 requires significantly larger computational resources than ENZO-2.4 during collapse, and similar resources during the pre-collapse, cosmological structure formation phase. Yet it benefits from substantially higher gravitational force and hydrodynamic resolutions, except at the end of collapse.

  13. Development and validation of a critical gradient energetic particle driven Alfven eigenmode transport model for DIII-D tilted neutral beam experiments

    DOE PAGES

    Waltz, Ronald E.; Bass, Eric M.; Heidbrink, William W.; ...

    2015-10-30

    Recent experiments with the DIII-D tilted neutral beam injection (NBI), varying the beam energetic particle (EP) source profiles, have provided strong evidence that unstable Alfven eigenmodes (AE) drive stiff EP transport at a critical EP density gradient. Here the critical gradient is identified by the local AE growth rate being equal to the local ITG/TEM growth rate at the same low toroidal mode number. The growth rates are taken from the gyrokinetic code GYRO. Simulations show that the slowing-down beam-like EP distribution has a slightly lower critical gradient than the Maxwellian. The ALPHA EP density transport code, used to validate the model, combines the low-n stiff EP critical density gradient AE mid-core transport with the energy-independent high-n ITG/TEM density transport model controlling the central core EP density profile. For the on-axis NBI heated DIII-D shot 146102, while the net loss to the edge is small, about half the birth fast ions are transported from the central core r/a < 0.5 and the central density is about half the slowing-down density. Lastly, these results are in good agreement with experimental fast ion pressure profiles inferred from MSE-constrained EFIT equilibria.

  14. A microwave FEL (free electron laser) code using waveguide modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byers, J.A.; Cohen, R.H.

    1987-08-01

    A free electron laser code, GFEL, is being developed for application to the LLNL tokamak current drive experiment, MTX. This single-frequency code solves for the slowly varying complex field amplitude using the usual wiggler-averaged equations of existing codes, in particular FRED, except that it describes the fields by a 2D expansion in the rectangular waveguide modes, using coupling coefficients similar to those developed by Wurtele, which include effects of spatial variations in the fields seen by the wiggler motion of the particles. Our coefficients differ from those of Wurtele in two respects. First, we have found a missing √2γ/a_w factor in his C_z; when corrected this increases the effect of the E_z field component, and this in turn reduces the amplitude of the TM mode. Second, we have consistently retained all terms of second order in the wiggle amplitude. Both corrections are necessary for accurate computation. GFEL has the capability of following the TE_0n and TE(M)_m1 modes simultaneously. GFEL produces results nearly identical to those from FRED if the coupling coefficients are adjusted to equal those implied by the algorithm in FRED. Normally, the two codes produce results that are similar but different in detail due to the different treatment of modes higher than TE_01. 5 refs., 2 figs., 1 tab.

  15. Motion of dust particles in nonuniform magnetic field and applicability of smoothed particle hydrodynamics simulation

    NASA Astrophysics Data System (ADS)

    Saitou, Y.

    2018-01-01

    An SPH (Smoothed Particle Hydrodynamics) simulation code is developed to reproduce the behavior of dust particles observed in our previous experiments (Phys. Plasmas 23, 013709 (2016) and Abst. 18th Intern. Cong. Plasma Phys. (Kaohsiung, 2016)). Usually, in an SPH simulation, a smoothed particle is interpreted as a discretized fluid element. Here we instead regard the smoothed particles as dust particles, because the behavior of dust particles in complex plasmas can in many cases be described by fluid dynamics equations. In the newly developed simulation, various rotation velocities that are difficult to achieve in the experiment are imposed on particles at the boundaries, and the resulting particle motion is investigated. Preliminary results obtained by the simulation are shown.
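
    For reference, the core of any SPH code is a kernel-weighted summation over particles; reinterpreting the smoothed particles as dust grains does not change this machinery. A minimal sketch using the standard 2D cubic-spline kernel:

```python
import math

def w_cubic_spline_2d(r, h):
    """Standard 2D cubic-spline SPH kernel with support radius 2h."""
    q, sigma = r / h, 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """SPH density estimate rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    rho = []
    for xi, yi in positions:
        rho.append(sum(m * w_cubic_spline_2d(math.hypot(xi - xj, yi - yj), h)
                       for (xj, yj), m in zip(positions, masses)))
    return rho
```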

  16. Some Developments of the Equilibrium Particle Simulation Method for the Direct Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Macrossan, M. N.

    1995-01-01

    The direct simulation Monte Carlo (DSMC) method is the established technique for the simulation of rarefied gas flows. In some flows of engineering interest, such as occur for aero-braking spacecraft in the upper atmosphere, DSMC can become prohibitively expensive in CPU time because some regions of the flow, particularly on the windward side of blunt bodies, become collision dominated. As an alternative to using a hybrid DSMC and continuum gas solver (Euler or Navier-Stokes solver), this work is aimed at making the particle simulation method efficient in the high density regions of the flow. A high density, infinite collision rate limit of DSMC, the Equilibrium Particle Simulation method (EPSM), was proposed some 15 years ago. EPSM is developed here for the flow of a gas consisting of many different species of molecules and is shown to be computationally efficient (compared to DSMC) for high collision rate flows. It thus offers great potential as part of a hybrid DSMC/EPSM code which could handle flows in the transition regime between rarefied gas flows and fully continuum flows. As a first step towards this goal a pure EPSM code is described. The next step of combining DSMC and EPSM is not attempted here but should be straightforward. EPSM and DSMC are applied to Taylor-Couette flow with Kn = 0.02 and 0.0133 and S(ω) = 3. Toroidal vortices develop for both methods but some differences are found, as might be expected for the given flow conditions. EPSM appears to be less sensitive to the sequence of random numbers used in the simulation than is DSMC and may also be more dissipative. The question of the origin and the magnitude of the dissipation in EPSM is addressed. It is suggested that this analysis is also relevant to DSMC when the usual accuracy requirements on the cell size and decoupling time step are relaxed in the interests of computational efficiency.
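
    The EPSM limit can be sketched in one cell: rather than computing collisions, every particle is redrawn from the local equilibrium (Maxwellian), and the samples are shifted and rescaled so the cell's momentum and kinetic energy are conserved exactly. This is a hypothetical one-dimensional illustration, not Macrossan's actual implementation:

```python
import math
import random

def epsm_cell_update(velocities, seed=0):
    """Replace a cell's particle velocities by equilibrium samples.

    This is the infinite-collision-rate limit: instead of simulating
    individual collisions (DSMC), draw fresh Gaussian velocities and then
    shift/scale them so that the cell's mean velocity and kinetic energy
    match the incoming ensemble exactly.
    """
    rng = random.Random(seed)
    n = len(velocities)
    mean = sum(velocities) / n
    var = sum(v * v for v in velocities) / n - mean * mean
    fresh = [rng.gauss(0.0, 1.0) for _ in range(n)]
    f_mean = sum(fresh) / n
    f_var = sum((v - f_mean) ** 2 for v in fresh) / n
    scale = math.sqrt(var / f_var)
    return [mean + scale * (v - f_mean) for v in fresh]

new_v = epsm_cell_update([1.0, 2.0, 3.0, 4.0])
# The cell's momentum and kinetic energy are unchanged by the update.
```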

  17. EVIDENCE FOR ENHANCED ³He IN FLARE-ACCELERATED PARTICLES BASED ON NEW CALCULATIONS OF THE GAMMA-RAY LINE SPECTRUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, R. J.; Kozlovsky, B.; Share, G. H., E-mail: murphy@ssd5.nrl.navy.mil, E-mail: benz@wise.tau.ac.il, E-mail: share@astro.umd.edu

    2016-12-20

    The ³He abundance in impulsive solar energetic particle (SEP) events is enhanced up to several orders of magnitude compared to its photospheric value of [³He]/[⁴He] = 1–3 × 10⁻⁴. Interplanetary magnetic field and timing observations suggest that these events are related to solar flares. Observations of ³He in flare-accelerated ions would clarify the relationship between these two phenomena. Energetic ³He interactions in the solar atmosphere produce gamma-ray nuclear-deexcitation lines, both lines that are also produced by protons and α particles and lines that are essentially unique to ³He. Gamma-ray spectroscopy can, therefore, reveal enhanced levels of accelerated ³He. In this paper, we identify all significant deexcitation lines produced by ³He interactions in the solar atmosphere. We evaluate their production cross sections and incorporate them into our nuclear deexcitation-line code. We find that enhanced ³He can affect the entire gamma-ray spectrum. We identify gamma-ray line features for which the yield ratios depend dramatically on the ³He abundance. We determine the accelerated ³He/α ratio by comparing these ratios with flux ratios measured previously from the gamma-ray spectrum obtained by summing the 19 strongest flares observed with the Solar Maximum Mission Gamma-Ray Spectrometer. All six flux ratios investigated show enhanced ³He, confirming earlier suggestions. The ³He/α weighted mean of these new measurements ranges from 0.05 to 0.3 (depending on the assumed accelerated α/proton ratio) and has a <1 × 10⁻³ probability of being consistent with the photospheric value. With the improved code, we can now exploit the full potential of gamma-ray spectroscopy to establish the relationship between flare-accelerated ions and ³He-rich SEPs.

  18. Entangled cloning of stabilizer codes and free fermions

    NASA Astrophysics Data System (ADS)

    Hsieh, Timothy H.

    2016-10-01

    Though the no-cloning theorem [Wootters and Zurek, Nature (London) 299, 802 (1982), 10.1038/299802a0] prohibits exact replication of arbitrary quantum states, there are many instances in quantum information processing and entanglement measurement in which a weaker form of cloning may be useful. Here, I provide a construction for generating an "entangled clone" for a particular but rather expansive and rich class of states. Given a stabilizer code or free fermion Hamiltonian, this construction generates an exact entangled clone of the original ground state, in the sense that the entanglement between the original and the exact copy can be tuned to be arbitrarily small but finite, or large, and the relation between the original and the copy can also be modified to some extent. For example, this Rapid Communication focuses on generating time-reversed copies of stabilizer codes and particle-hole transformed ground states of free fermion systems, although untransformed clones can also be generated. The protocol leverages entanglement to simulate a transformed copy of the Hamiltonian without having to physically implement it and can potentially be realized in superconducting qubits or ultracold atomic systems.

  19. Modeling carbon production and transport during ELMs in DIII-D

    NASA Astrophysics Data System (ADS)

    Hogan, J.; Wade, M.; Coster, D.; Lasnier, C.

    2004-11-01

    Large-scale Type I ELM events could provide a significant C source in ITER, and C production rates depend on incident D flux density and surface temperature, quantities which can vary significantly during an ELM event. Recent progress on DIII-D has improved opportunities for code comparison. Fast time-scale measurements of divertor CIII evolution [1] and fast edge CER measurements of C profile evolution during low-density DIII-D LSN ELMy H-modes (Type I) [2] have been modeled using the solps5.0/Eirene99 coupled edge code and time-dependent thermal analysis codes. An ELM model based on characteristics of MHD peeling-ballooning modes reproduces the pedestal evolution. Qualitative agreement for the CIII evolution during an ELM event is found using the Roth et al. annealing model for chemical sputtering, and the sensitivity to other models is described. Significant ELM-to-ELM variations in observed maximum divertor target IR temperature during nominally identical ELMs are investigated with models for C emission from micron-scale dust particles. [1] M. Groth, M. Fenstermacher et al., J. Nucl. Mater. 2003; [2] M. Wade, K. Burrell et al., PSI-16

  20. Numerical simulation support to the ESA/THOR mission

    NASA Astrophysics Data System (ADS)

    Valentini, F.; Servidio, S.; Perri, S.; Perrone, D.; De Marco, R.; Marcucci, M. F.; Daniele, B.; Bruno, R.; Camporeale, E.

    2016-12-01

    THOR is a spacecraft concept currently undergoing study phase as a candidate for the next ESA medium size mission M4. THOR has been designed to solve the longstanding physical problems of particle heating and energization in turbulent plasmas. It will provide high resolution measurements of electromagnetic fields and particle distribution functions with unprecedented resolution, with the aim of exploring the so-called kinetic scales. We present the numerical simulation framework which is supporting the THOR mission during the study phase. The THOR team includes many scientists developing and running different simulation codes (Eulerian-Vlasov, Particle-In-Cell, Gyrokinetics, Two-fluid, MHD, etc.), addressing the physics of plasma turbulence, shocks, magnetic reconnection and so on. These numerical codes are being used during the study phase, mainly with the aim of addressing the following points: (i) to simulate the response of real particle instruments on board THOR, by employing an electrostatic analyser simulator which mimics the response of the CSW, IMS and TEA instruments to the particle velocity distributions of protons, alpha particles and electrons, as obtained from kinetic numerical simulations of plasma turbulence; (ii) to compare multi-spacecraft with single-spacecraft configurations in measuring current density, by making use of both numerical models of synthetic turbulence and real data from MMS spacecraft; (iii) to investigate the validity of the Taylor hypothesis in different configurations of plasma turbulence.
