Sample records for problem particle simulations

  1. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications-intensive operation which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
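
    The reduction from sorting to merging can be illustrated with a minimal sketch (assumptions: particles carry integer cell indices and move at most one cell per timestep, so a previously sorted key array decomposes into a handful of ascending runs; the run-splitting rule below is illustrative, not the paper's Connection Machine algorithm):

    ```python
    import heapq

    def resort_cell_keys(keys):
        """Re-sort nearly-sorted integer cell indices after one timestep.

        Split the array into maximal ascending runs (few of them, since
        each key moved only slightly), then merge the runs; for R runs
        this costs O(N log R), effectively O(N) when R is small.
        """
        runs, start = [], 0
        for i in range(1, len(keys) + 1):
            if i == len(keys) or keys[i] < keys[i - 1]:
                runs.append(keys[start:i])
                start = i
        return list(heapq.merge(*runs))

    print(resort_cell_keys([0, 1, 1, 3, 2, 2, 4, 3, 5]))
    ```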

  2. An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum based or pure particle based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smooth particle hydrodynamics models.

  3. The Million-Body Problem: Particle Simulations in Astrophysics

    ScienceCinema

    Rasio, Fred

    2018-05-21

    Computer simulations using particles play a key role in astrophysics. They are widely used to study problems across the entire range of astrophysical scales, from the dynamics of stars, gaseous nebulae, and galaxies, to the formation of the largest-scale structures in the universe. The 'particles' can be anything from elementary particles to macroscopic fluid elements, entire stars, or even entire galaxies. Using particle simulations as a common thread, this talk will present an overview of computational astrophysics research currently done in our theory group at Northwestern. Topics will include stellar collisions and the gravothermal catastrophe in dense star clusters.

  4. Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH

    NASA Astrophysics Data System (ADS)

    Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.

    2018-01-01

    The smoothed particle hydrodynamics (SPH) method with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, most simulations use uniform particle distributions, and multi-resolution, which can markedly improve local accuracy and overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. Conservation of mass, momentum and energy is ensured in the particle splitting process. Based on an error analysis, the splitting technique is designed to achieve optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated on five basic cases, which demonstrate that the present SPH model with the particle splitting technique is accurate and efficient and is capable of simulating a wide range of hydrodynamic problems.
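
    A minimal sketch of such a splitting step (the square four-daughter stencil, equal mass ratio, and smoothing-length factor below are illustrative placeholders rather than the paper's calibrated values):

    ```python
    import numpy as np

    def split_particle(x, m, h, alpha=0.5, eps=0.4):
        """Split one 2-D mother SPH particle into four daughters.

        Each daughter inherits a quarter of the mass (conserving mass,
        and momentum/energy if velocity and internal energy are copied),
        sits on a square stencil of half-width eps*h around the mother,
        and carries a reduced smoothing length alpha*h.
        """
        offsets = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
        xd = x + offsets            # daughter positions
        md = np.full(4, m / 4.0)    # daughter masses
        hd = np.full(4, alpha * h)  # daughter smoothing lengths
        return xd, md, hd

    xd, md, hd = split_particle(np.zeros(2), m=1.0, h=0.1)
    print(md.sum())  # total mass is conserved: 1.0
    ```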

  5. Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1999-01-01

    Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.

  6. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three-dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable, and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
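
    A toy sketch of the post-processing idea (the record layout and pdf interfaces below are hypothetical; real tally records carry more state than an energy and an angle):

    ```python
    def reweight_tallies(records, old_pdf, new_pdf):
        """Re-evaluate a detector response for a new source distribution.

        records holds one entry per recorded source particle: the sampled
        source variables (here energy E and angle mu) plus its accumulated
        tally. The response under a different source is recovered by
        multiplying each tally by the ratio of new to old source density,
        with no re-transport needed.
        """
        total = 0.0
        for E, mu, tally in records:
            total += tally * new_pdf(E, mu) / old_pdf(E, mu)
        return total / len(records)
    ```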

  7. PICsar: Particle in cell pulsar magnetosphere simulator

    NASA Astrophysics Data System (ADS)

    Belyaev, Mikhail A.

    2016-07-01

    PICsar simulates the magnetosphere of an aligned axisymmetric pulsar and can be used to simulate other arbitrary electromagnetics problems in axisymmetry. Written in Fortran, this special relativistic, electromagnetic, charge conservative particle in cell code features stretchable body-fitted coordinates that follow the surface of a sphere, simplifying the application of boundary conditions in the case of the aligned pulsar; a radiation absorbing outer boundary, which allows a steady state to be set up dynamically and maintained indefinitely from transient initial conditions; and algorithms for injection of charged particles into the simulation domain. PICsar is parallelized using MPI and has been used on research problems with 1000 CPUs.

  8. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for fluid-particle flows

    NASA Astrophysics Data System (ADS)

    Kong, Bo; Patel, Ravi G.; Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.

    2017-11-01

    In this work, we study the performance of three simulation techniques for fluid-particle flows: (1) a volume-filtered Euler-Lagrange approach (EL), (2) a quadrature-based moment method using the anisotropic Gaussian closure (AG), and (3) a traditional two-fluid model (TFM). Two problems are simulated: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). The convergence of the methods under grid refinement is found to depend on the simulation method and the specific problem, with CIT simulations facing fewer difficulties than HIT. Although EL converges under refinement for both HIT and CIT, its statistical results depend on the techniques used to extract statistics for the particle phase. For HIT, converging both EE methods (TFM and AG) poses challenges, while for CIT, AG and EL produce similar results. Overall, all three methods face challenges in extracting converged, parameter-independent statistics due to the presence of shocks in the particle phase. Supported by the National Science Foundation and the National Energy Technology Laboratory.

  9. Charging of particles on a surface

    NASA Astrophysics Data System (ADS)

    Heijmans, Lucas; Nijdam, Sander

    2016-09-01

    This contribution focuses on the seemingly easy problem of the charging of micrometer-sized particles on a substrate in a plasma. The problem seems trivial because much is known about both the charging of surfaces near a plasma and of particles in the plasma bulk. It becomes much more complicated, however, when the particle rests on the substrate surface: the charging currents to the particle are then strongly altered by the substrate plasma sheath. Currently there is no consensus in the literature about the resulting particle charge. We shall present both experimental measurements and numerical simulations of the charge on these particles. The experimental results are acquired by measuring the particle acceleration in an external electric field. For the simulations we have used our specially developed model. We shall compare these results to other estimates found in the literature.
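
    The measurement principle reduces to Newton's second law: in a uniform external field E, a particle of mass m accelerating at a carries charge q = ma/E (drag and image forces neglected). A back-of-the-envelope sketch, with all numbers invented for illustration:

    ```python
    # Hypothetical numbers: a 5 um silica-like sphere in a 20 kV/m field.
    rho, d = 2200.0, 5e-6                       # density [kg/m^3], diameter [m]
    m = rho * 3.14159 / 6.0 * d**3              # particle mass [kg]
    a, E = 0.8, 2.0e4                           # measured accel. [m/s^2], field [V/m]
    q = m * a / E                               # inferred charge [C]
    print(q / 1.602e-19, "elementary charges")  # order-of-magnitude result
    ```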

  10. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  11. Simulations of dusty plasmas using a special-purpose computer system designed for gravitational N-body problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, K.; Mizuno, Y.; Hibino, S.

    2006-01-15

    Simulations of dusty plasmas were performed using GRAPE-6, a special-purpose computer designed for gravitational N-body problems. The collective behavior of dust particles, which are injected into the plasma, was studied by means of three-dimensional computer simulations. As an example of a dusty plasma simulation, experiments on Coulomb crystals in plasmas are simulated. Formation of a quasi-two-dimensional Coulomb crystal has been observed under typical laboratory conditions. Another example was to simulate movement of dust particles in plasmas under microgravity conditions. Fully three-dimensional spherical structures of dust clouds have been observed. For the simulation of a dusty plasma in microgravity with 3×10⁴ particles, GRAPE-6 can perform the whole operation 1000 times faster than by using a Pentium 4 1.6 GHz processor.

  12. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes such as elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, from the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual exploration.

    Catalogue identifier: AERR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 111327
    No. of bytes in distributed program, including test data, etc.: 608411
    Distribution format: tar.gz
    Programming language: C++, OpenGL, GLSL, OpenCL
    Computer: Linux and Windows platforms with OpenGL support
    Operating system: Linux and Windows
    RAM: source code 4.5 MB; complete package 242 MB
    Classification: 14, 16.9
    External routines: OpenGL, OpenCL
    Nature of problem: Integrate N-body simulations, mass-spring models
    Solution method: Numerical integration of N-body simulations; 3D rendering via OpenGL
    Running time: Problem dependent
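
    The "missing link" integrator the abstract refers to fits in a few lines; a minimal explicit-Euler sketch for a gravitational N-body system (the softening length is an illustrative choice, and this is a teaching sketch, not MPPhys code):

    ```python
    import numpy as np

    def euler_step(x, v, m, dt, G=1.0):
        """One explicit Euler step for a gravitational N-body system.

        x, v are (N, 3) position/velocity arrays, m is (N,) masses.
        Forces are the O(N^2) pairwise sums, softened to avoid the
        singularity at zero separation.
        """
        a = np.zeros_like(x)
        for i in range(len(m)):
            r = x - x[i]                        # vectors to the other bodies
            d3 = (np.einsum('ij,ij->i', r, r) + 1e-6) ** 1.5
            d3[i] = np.inf                      # skip self-interaction
            a[i] = G * np.sum(m[:, None] * r / d3[:, None], axis=0)
        return x + dt * v, v + dt * a
    ```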

  13. Extension and Validation of a Hybrid Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 2

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.; Shivarama, Ravishankar

    2004-01-01

    The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.

  14. Coupling discrete and continuum concentration particle models for multiscale and hybrid molecular-continuum simulations

    NASA Astrophysics Data System (ADS)

    Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott

    2017-12-01

    Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely resolved (e.g., molecular dynamics) and coarse-grained (e.g., continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 084115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics. An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.

  15. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for turbulent fluid-particle flows

    DOE PAGES

    Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...

    2017-09-01

    Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.

  16. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for turbulent fluid-particle flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Ravi G.; Desjardins, Olivier; Kong, Bo

    Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.

  17. DEM GPU studies of industrial scale particle simulations for granular flow civil engineering applications

    NASA Astrophysics Data System (ADS)

    Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine

    2017-06-01

    The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited by the computational demands when large numbers of particles are considered. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable the solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment for simulating industrial-scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial-scale problems on a single portable computer within a day for 30 seconds of process time.
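
    The kernel that dominates such runs is the per-pair contact force; a minimal linear spring-dashpot sketch for the normal component (stiffness and damping values are placeholders, and production DEM codes add tangential friction and contact history):

    ```python
    import numpy as np

    def normal_contact_force(xi, xj, vi, vj, Ri, Rj, kn=1e5, cn=5.0):
        """Linear spring-dashpot normal force between two DEM spheres.

        For an overlapping pair, the repulsive force is kn*overlap along
        the center line plus viscous damping on the normal relative
        velocity; returns the force acting on particle i.
        """
        d = xj - xi
        dist = np.linalg.norm(d)
        overlap = Ri + Rj - dist
        if overlap <= 0.0:
            return np.zeros(3)              # no contact
        n = d / dist                        # unit normal from i toward j
        vn = np.dot(vj - vi, n)             # normal relative velocity
        return -(kn * overlap - cn * vn) * n
    ```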

  18. Plasma Modeling with Speed-Limited Particle-in-Cell Techniques

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Werner, G. R.; Cary, J. R.; Stoltz, P. H.

    2017-10-01

    Speed-limited particle-in-cell (SLPIC) modeling is a new particle simulation technique for systems in which numerical constraints, e.g. limits on timestep size required for numerical stability, are significantly more restrictive than needed to model the slower kinetic processes of interest. SLPIC imposes artificial speed-limiting behavior on fast particles whose kinetics do not play a meaningful role in the system dynamics, thus enabling larger simulation timesteps and more rapid modeling of plasma discharges. The use of SLPIC methods to model plasma sheath formation and the free expansion of plasma into vacuum will be demonstrated. Wallclock times for these simulations, relative to conventional PIC, are reduced by a factor of 2.5 for the plasma expansion problem and by over 6 for the sheath formation problem; additional speedup is likely possible. Physical quantities of interest are shown to be correct for these benchmark problems. Additional SLPIC applications will also be discussed. Supported by US DoE SBIR Phase I/II Award DE-SC0015762.
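
    A cartoon of the speed-limiting idea (a sketch only: the statistical weighting that keeps the slow dynamics exact is the substance of the SLPIC method and is omitted here):

    ```python
    import numpy as np

    def slpic_push(x, v, dt, v_max):
        """Cartoon of a speed-limited particle push.

        Each particle keeps its true velocity v (used for field moments),
        but its displacement per step is throttled so no particle crosses
        more than v_max*dt, relaxing the usual stability limit on dt.
        """
        speed = np.linalg.norm(v, axis=1, keepdims=True)
        beta = np.minimum(1.0, v_max / np.maximum(speed, 1e-30))
        return x + beta * v * dt
    ```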

  19. Numerical Study of Particle Damping Mechanism in Piston Vibration System via Particle Dynamics Simulation

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Ming; Shah, Binoy; Keer, Leon; Wang, Jane; Snurr, Randall

    2008-03-01

    Mechanical damping systems with granular particles as the damping media have promising applications in extreme temperature conditions. In particle-based damping systems, mechanical energy is dissipated through inelastic collisions and friction between particles. Many experiments have been performed in the past to investigate particle damping, but the detailed energy dissipation mechanism is still unclear due to the complex collision and flow behavior of dense particle assemblies. In this work, we use 3-D particle dynamics simulation to investigate the damping mechanism of an oscillating cylindrical piston immersed in millimeter-size steel particles. The time evolution of the energy dissipated through friction and inelastic collisions is accurately monitored during the damping process, and the contributions from particle-particle and particle-wall interactions are separated for investigation. The effects of moisture, surface roughness, and particle density are carefully investigated in the simulation, and a comparison between the numerical simulation and experiment is also performed. The simulation results can help us understand the particle damping mechanism and design a new generation of particle damping devices.

  20. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial-range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.

  1. Microscale simulations of shock interaction with large assembly of particles for developing point-particle models

    NASA Astrophysics Data System (ADS)

    Thakur, Siddharth; Neal, Chris; Mehta, Yash; Sridharan, Prasanth; Jackson, Thomas; Balachandar, S.

    2017-01-01

    Microscale simulations are being conducted for developing point-particle and other related models that are needed for mesoscale and macroscale simulations of explosive dispersal of particles. These particle models are required to compute (a) the instantaneous aerodynamic force on the particle and (b) the instantaneous net heat transfer between the particle and the surroundings. A strategy for a sequence of microscale simulations has been devised that allows systematic development of hybrid surrogate models applicable at conditions representative of the explosive dispersal application. The ongoing microscale simulations examine the dependence of the particle force on (a) Mach number, (b) Reynolds number, and (c) volume fraction (for different particle arrangements such as cubic, face-centered cubic (FCC), body-centered cubic (BCC) and random). Future plans include sequences of fully-resolved microscale simulations consisting of an array of particles subjected to more realistic time-dependent flows that progressively better approximate the actual problem of explosive dispersal. Additionally, the effects of particle shape, size, and number in the simulation, as well as the dependence of transient particle deformation on various parameters, including (a) particle material, (b) medium material, (c) multiple particles, (d) incoming shock pressure and speed, (e) medium-to-particle impedance ratio, and (f) particle shape and orientation to the shock, are being investigated.

  2. The effects of particle loading on turbulence structure and modelling

    NASA Technical Reports Server (NTRS)

    Squires, Kyle D.; Eaton, J. K.

    1989-01-01

    The objective of the present research was to extend the Direct Numerical Simulation (DNS) approach to particle-laden turbulent flows using a simple model of particle/flow interaction. The program addressed the simplest type of flow, homogeneous isotropic turbulence, and examined interactions between the particles and gas-phase turbulence. The specific range of problems examined includes those in which the particle is much smaller than the smallest length scales of the turbulence yet heavy enough to slip relative to the flow. The particle mass loading is large enough to have a significant impact on the turbulence, while the volume loading is small enough that particle-particle interactions can be neglected. These simulations are therefore relevant to practical problems involving small, dense particles conveyed by turbulent gas flows at moderate loadings. A sample of the results illustrating modifications of the particle concentration field caused by the turbulence structure is presented, and attenuation of turbulence by the particle cloud is also illustrated.
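
    In this regime each particle obeys the point-particle drag law dv/dt = (u(x_p) - v)/τ_p, with τ_p the particle response time; a minimal sketch (the u_at interpolation interface is an assumed placeholder, and two-way coupling back to the gas is omitted):

    ```python
    import numpy as np

    def advance_heavy_particle(xp, vp, u_at, tau_p, dt):
        """Point-particle update with Stokes drag.

        Implements dv/dt = (u(x_p) - v)/tau_p for particles much smaller
        than the smallest turbulent scales yet dense enough to slip;
        u_at is a callable returning the gas velocity interpolated to
        the particle position.
        """
        vp = vp + dt * (u_at(xp) - vp) / tau_p
        xp = xp + dt * vp
        return xp, vp
    ```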

  3. Numerical simulations of the charged-particle flow dynamics for sources with a curved emission surface

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.

    2016-12-01

    The implementation of numerical methods for studying the dynamics of particle flows produced by pulsed sources is discussed. A particle tracking method with so-called gun iteration is used for simulations of beam dynamics. For the space-charge-limited emission problem, we suggest a Gauss-law emission model for precise current-density calculation in the case of a curvilinear emitter. The results of numerical simulations of particle-flow formation for a cylindrical bipolar diode and for a diode with an elliptical emitter are presented.
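
    For orientation, the 1-D planar reference case of space-charge-limited emission is the Child-Langmuir law; a quick numerical sketch (the paper's Gauss-law model generalizes emission to curved emitters, which this planar formula does not capture):

    ```python
    import numpy as np

    eps0, e, m_e = 8.854e-12, 1.602e-19, 9.109e-31

    def child_langmuir_J(V, d):
        """Space-charge-limited current density [A/m^2] of a planar diode
        with gap voltage V [V] and electrode spacing d [m]:
        J = (4*eps0/9) * sqrt(2*e/m_e) * V**1.5 / d**2."""
        return (4.0 * eps0 / 9.0) * np.sqrt(2.0 * e / m_e) * V**1.5 / d**2

    print(child_langmuir_J(V=1e3, d=1e-2))  # ~7e2 A/m^2
    ```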

  4. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool to solve radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package gMMC on GPU with this new scheme implemented. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
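
    A skeleton of path-by-path sampling with Metropolis-Hastings acceptance (sample_path and path_weight are placeholders for the problem-specific proposal and transport physics; the acceptance ratio shown assumes a symmetric proposal and strictly positive weights):

    ```python
    import random

    def metropolis_paths(sample_path, path_weight, n_samples):
        """Skeleton of path-by-path Monte Carlo with MH acceptance.

        Whole photon paths are proposed and accepted/rejected so that
        paths occur with frequency proportional to their physical weight
        (transport probability times detector contribution).
        """
        current = sample_path()
        w_cur, tallies = path_weight(current), []
        for _ in range(n_samples):
            proposal = sample_path()
            w_prop = path_weight(proposal)
            if random.random() < min(1.0, w_prop / w_cur):
                current, w_cur = proposal, w_prop
            tallies.append(w_cur)
        return tallies
    ```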

  5. Spatial Variability of Organic Carbon in a Fractured Mudstone and Its Effect on the Retention and Release of Trichloroethene (TCE)

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2016-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy over time. The method allows particles to split and merge efficiently when necessary, and to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantages of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  6. Locally adaptive methods for KDE-based random walk models of reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2017-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy over time. The method allows particles to split and merge efficiently when necessary, and to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantages of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
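
    A one-dimensional toy of the branching step (the threshold, kernel radius, and split rule are illustrative; the paper's method also merges particles in over-dense regions and adapts kernel shapes):

    ```python
    import numpy as np

    def branch(positions, weights, rho_split, radius):
        """Toy 1-D splitting step for an adaptive branching random walk.

        Where the local particle density falls below rho_split, a
        particle is replaced by two half-weight copies nudged apart,
        maintaining resolution as the plume dilutes.
        """
        new_x, new_w = [], []
        for i, x in enumerate(positions):
            density = np.sum(np.abs(positions - x) < radius) / (2 * radius)
            if density < rho_split:
                for dx in (-0.25 * radius, 0.25 * radius):
                    new_x.append(x + dx)
                    new_w.append(weights[i] / 2.0)
            else:
                new_x.append(x)
                new_w.append(weights[i])
        return np.array(new_x), np.array(new_w)
    ```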

  7. Particle simulation of plasmas and stellar systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, T.; Clark, A.; Craddock, G.G.

    1985-04-01

    A computational technique is introduced which allows the student and researcher an opportunity to observe the physical behavior of a class of many-body systems. A series of examples is offered which illustrates the diversity of problems that may be studied using particle simulation. These simulations were in fact assigned as homework in a course on computational physics.

  8. Numerical simulation of sloshing with large deforming free surface by MPS-LES method

    NASA Astrophysics Data System (ADS)

    Pan, Xu-jie; Zhang, Huai-xin; Sun, Xue-yao

    2012-12-01

    Moving particle semi-implicit (MPS) is a fully Lagrangian particle method that can readily solve problems with violent free surfaces. Although it has demonstrated its advantages in ocean engineering applications, it still has shortcomings to be addressed. In this paper, the MPS method is extended to large eddy simulation (LES) by coupling it with a sub-particle-scale (SPS) turbulence model. The SPS turbulence model enters the filtered momentum equation through the Reynolds stress terms, and the Smagorinsky model is introduced to describe those terms. Although the MPS method has an advantage in simulating free surface flow, many non-free-surface particles are misclassified as free surface particles in the original MPS model. In this paper, we use a new free surface tracing method whose key concept is the "neighbor particle". In this method, the zone around each particle is divided into eight parts, and the particle is treated as a free surface particle as long as there are no "neighbor particles" in any two parts of the zone. Because the number-density criterion traces free surface particles efficiently, we combine it with the neighbor-detection method: we first select the particles most likely to be misclassified using the number-density criterion, and then examine them with the neighbor-detection method. The resulting mixed free surface tracing method reduces misclassification efficiently. Severe pressure fluctuation is an obvious defect of the MPS method, so an area-time averaging technique is used in this paper to remove the pressure fluctuation, with quite good results. With these improvements, the modified MPS-LES method is applied to simulate liquid sloshing problems with large deforming free surfaces. Results show that the modified MPS-LES method can simulate large deforming free surfaces easily. It not only captures the large impact pressure on the rolling tank wall accurately but also reproduces all the physical phenomena successfully. The good agreement between numerical and experimental results proves that the modified MPS-LES method is a good CFD methodology for free surface flow simulations.
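
    A compact sketch of the eight-sector neighbor test described above (2-D case; the sector count and the two-empty-sector rule follow the abstract, the rest is an illustrative reading):

    ```python
    import numpy as np

    def is_free_surface(i, x, h):
        """Eight-sector free-surface test for particle i (2-D positions x).

        The neighborhood of radius h is divided into eight angular
        sectors; the particle is flagged as free surface if at least two
        sectors contain no neighbor.
        """
        d = x - x[i]
        r = np.hypot(d[:, 0], d[:, 1])
        mask = (r > 0) & (r < h)
        ang = np.arctan2(d[mask, 1], d[mask, 0]) + np.pi   # [0, 2*pi]
        sectors = (ang / (2 * np.pi) * 8).astype(int) % 8
        return 8 - len(np.unique(sectors)) >= 2
    ```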

  9. Analytical solutions for coagulation and condensation kinetics of composite particles

    NASA Astrophysics Data System (ADS)

    Piskunov, Vladimir N.

    2013-04-01

    The processes of formation of composite particles consisting of a mixture of different materials are essential for many practical problems: for analysis of the consequences of accidental releases into the atmosphere, for simulation of precipitation formation in clouds, and for description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for numerical simulation of these processes require optimization of computational methods and verification of numerical programs. Kinetic equations of composite particle formation are given in this work in a concise (impurity-integrated) form. Coagulation, condensation and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases, and general laws for the redistribution of impurity fractions were defined. The results can be applied to develop numerical algorithms that considerably reduce the simulation effort, as well as to verify numerical programs for calculating the formation kinetics of composite particles in problems of practical importance.
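
    In standard single-composition form, with a source term $S$ representing nucleation, the coagulation kinetics referred to here are governed by the Smoluchowski equation (the paper's impurity-integrated equations extend this by tracking the composition carried at each mass):

    ```latex
    \frac{\partial n(m,t)}{\partial t}
      = \frac{1}{2}\int_0^{m} K(m',\,m-m')\, n(m',t)\, n(m-m',t)\, dm'
      - n(m,t)\int_0^{\infty} K(m,m')\, n(m',t)\, dm' + S(m,t)
    ```

    Here $n(m,t)$ is the number density of particles of mass $m$ and $K$ is the coagulation kernel.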

  10. Computational plasticity algorithm for particle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  11. A generalized transport-velocity formulation for smoothed particle hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.

    The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized to provide a solution to this long-standing problem. Rather than imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, such a modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials with and without free surfaces. The results of extensive numerical tests on both fluid and solid dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.
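
    Schematically, following the Adami et al. (2013) formulation that this work generalizes (a sketch of the original scheme, not the paper's exact generalized form), the advection velocity of particle $i$ is shifted by a background-pressure term:

    ```latex
    \tilde{\mathbf{v}}_i = \mathbf{v}_i
      - \delta t\, p_{b,i} \sum_j m_j
        \left( \frac{1}{\rho_i^{2}} + \frac{1}{\rho_j^{2}} \right) \nabla_i W_{ij}
    ```

    In the original method the background pressure $p_b$ is a global constant; the generalization described above makes $p_{b,i}$ variable per particle and evaluates the kernel gradient $\nabla_i W_{ij}$ with a shortened smoothing length, localizing the correction.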

  12. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. To investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulation, so parallel computing is important for handling the huge computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for SPH and DEM utilizing dynamic load balancing algorithms, aimed at high-resolution simulation over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time among MPI processes as the nonlinear residual of the parallel domain decomposition and minimizes it with a Newton-like iteration; the slice-grid algorithm is used to allow flexible domain decomposition in space. Numerical tests show that our approach handles particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
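
    A stand-in for one rebalancing step over slice boundaries (the square-root relaxation below is a simple heuristic illustrating the idea, not the paper's Newton-like iteration):

    ```python
    import numpy as np

    def rebalance_slices(bounds, t_measured):
        """One relaxation step of slice-based dynamic load balancing.

        bounds[k] are the plane positions cutting the domain into slices
        owned by MPI ranks; t_measured[k] is each rank's wall-clock time
        for the last step. Slow slices are narrowed and fast ones widened,
        driving all times toward the mean, then widths are rescaled so
        the total domain size is preserved.
        """
        widths = np.diff(bounds)
        target = np.mean(t_measured)
        widths = widths * (target / np.asarray(t_measured)) ** 0.5
        widths *= (bounds[-1] - bounds[0]) / widths.sum()
        return np.concatenate(([bounds[0]], bounds[0] + np.cumsum(widths)))

    print(rebalance_slices(np.array([0.0, 1.0, 2.0, 3.0]), [2.0, 1.0, 1.0]))
    ```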

  13. Development of stress boundary conditions in smoothed particle hydrodynamics (SPH) for the modeling of solids deformation

    NASA Astrophysics Data System (ADS)

    Douillet-Grellier, Thomas; Pramanik, Ranjan; Pan, Kai; Albaiz, Abdulaziz; Jones, Bruce D.; Williams, John R.

    2017-10-01

    This paper develops a method for imposing stress boundary conditions in smoothed particle hydrodynamics (SPH) with and without the need for dummy particles. SPH has been used for simulating phenomena in a number of fields, such as astrophysics and fluid mechanics. More recently, the method has gained traction as a technique for simulating deformation and fracture in solids, where the meshless property of SPH can be leveraged to represent arbitrary crack paths. Despite this interest, application of boundary conditions within the SPH framework is typically limited to imposed velocity or displacement, using fictitious dummy particles to compensate for the lack of particles beyond the boundary interface. While this is enough for a large variety of problems, especially in the case of fluid flow, for problems in solid mechanics there is a clear need to impose stresses upon boundaries. In addition, the use of dummy particles to impose a boundary condition is not always suitable or even feasible, especially for problems that include internal boundaries. In order to overcome these difficulties, this paper first presents an improved method for applying stress boundary conditions in SPH with dummy particles, followed by a proposed formulation that does not require dummy particles. These techniques are then validated against analytical solutions to two common problems in rock mechanics, the Brazilian test and the penny-shaped crack problem, both in 2D and 3D. This study highlights the fact that SPH offers a good level of accuracy for these problems and that the results are reliable. This validation work serves as a foundation for addressing more complex problems involving plasticity and fracture propagation.

  14. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    NASA Astrophysics Data System (ADS)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.

  15. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem

    PubMed Central

    2012-01-01

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al. PMID:22338640

  16. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem.

    PubMed

    Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio

    2012-02-16

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al.

  17. Smoothed particle hydrodynamics method for evaporating multiphase flows.

    PubMed

    Yang, Xiufeng; Kong, Song-Charng

    2017-09-01

    The smoothed particle hydrodynamics (SPH) method has been increasingly used for simulating fluid flows; however, its ability to simulate evaporating flow requires significant improvement. This paper proposes an SPH method for evaporating multiphase flows. The present SPH method can simulate heat and mass transfer across liquid-gas interfaces. The conservation equations of mass, momentum, and energy were reformulated based on SPH and used to govern the fluid flow and heat transfer in both the liquid and gas phases. The continuity equation of the vapor species was employed to simulate the vapor mass fraction in the gas phase. The vapor mass fraction at the interface was predicted by the Clausius-Clapeyron correlation, and an evaporation rate was derived to predict the mass transfer from the liquid phase to the gas phase at the interface. Because of the mass transfer across the liquid-gas interface, the mass of an SPH particle was allowed to change, and particle splitting and merging techniques were developed to avoid large mass differences between SPH particles of the same phase. The proposed method was tested by simulating three problems: the Stefan problem, evaporation of a static drop, and evaporation of a drop impacting a hot surface. For the Stefan problem, the SPH results for the evaporation rate at the interface agreed well with the analytical solution. For drop evaporation, the SPH result was compared with the result predicted by a level-set method from the literature. In the case of drop impact on a hot surface, the evolution of the drop shape, temperature, and vapor mass fraction was predicted.
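
    The interface condition can be sketched as follows (the constants are for water evaporating into air and are assumptions of this sketch, as is the reference-point form of Clausius-Clapeyron):

    ```python
    import numpy as np

    def interface_vapor_fraction(T, T_b=373.15, L=2.26e6, R_v=461.5,
                                 p_atm=1.013e5, eps=0.622):
        """Vapor mass fraction at a liquid-gas interface.

        Clausius-Clapeyron gives the saturation pressure at interface
        temperature T [K] from a reference boiling point T_b; the mass
        fraction then follows from the vapor/air molar-mass ratio eps.
        """
        p_sat = p_atm * np.exp(L / R_v * (1.0 / T_b - 1.0 / T))
        return eps * p_sat / (p_atm - (1.0 - eps) * p_sat)

    print(interface_vapor_fraction(350.0))  # ~0.3 for water near 77 C
    ```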

  18. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation does not rely on kernel or interpolation functions for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three-dimensional simulation.

  19. Possibilities of Particle Finite Element Methods in Industrial Forming Processes

    NASA Astrophysics Data System (ADS)

    Oliver, J.; Cante, J. C.; Weyler, R.; Hernandez, J.

    2007-04-01

    The work investigates the possibilities offered by the particle finite element method (PFEM) in the simulation of forming problems involving large deformations, multiple contacts, and the generation of new boundaries. A description of the most distinguishing aspects of the PFEM, and its application to the simulation of representative forming processes, illustrate the proposed methodology.

  20. Exploring Focal and Aberration Properties of Electrostatic Lenses through Computer Simulation

    ERIC Educational Resources Information Center

    Sise, Omer; Manura, David J.; Dogan, Mevlut

    2008-01-01

    The interactive nature of computer simulation allows students to develop a deeper understanding of the laws of charged particle optics. Here, the use of commercially available optical design programs is described as a tool to aid in solving charged particle optics problems. We describe simple and practical demonstrations of basic electrostatic…

  1. Coupling discrete and continuum concentration particle models for multiscale and hybrid molecular-continuum simulations

    DOE PAGES

    Petsev, Nikolai Dimitrov; Leal, L. Gary; Shell, M. Scott

    2017-12-21

    Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely-resolved (e.g. molecular dynamics) and coarse-grained (e.g. continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 84115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics (SDPD). An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.

  2. Coupling discrete and continuum concentration particle models for multiscale and hybrid molecular-continuum simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petsev, Nikolai Dimitrov; Leal, L. Gary; Shell, M. Scott

    Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely-resolved (e.g. molecular dynamics) and coarse-grained (e.g. continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 84115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics (SDPD). An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.

  3. Transport dissipative particle dynamics model for mesoscopic advection-diffusion-reaction problems

    PubMed Central

    Yazdani, Alireza; Tartakovsky, Alexandre; Karniadakis, George Em

    2015-01-01

    We present a transport dissipative particle dynamics (tDPD) model for simulating mesoscopic problems involving advection-diffusion-reaction (ADR) processes, along with a methodology for implementation of the correct Dirichlet and Neumann boundary conditions in tDPD simulations. tDPD is an extension of the classic dissipative particle dynamics (DPD) framework with extra variables for describing the evolution of concentration fields. The transport of concentration is modeled by a Fickian flux and a random flux between tDPD particles, and advection is implicitly captured by the movements of these Lagrangian particles. An analytical formula is proposed to relate the tDPD parameters to the effective diffusion coefficient. To validate the present tDPD model and the boundary conditions, we perform three tDPD simulations of one-dimensional diffusion with different boundary conditions, and the results show excellent agreement with the theoretical solutions. We also perform two-dimensional simulations of ADR systems, and the tDPD results agree well with those obtained by the spectral element method. Finally, we present an application of the tDPD model to the dynamic process of blood coagulation involving 25 reacting species in order to demonstrate the potential of tDPD in simulating biological dynamics at the mesoscale. We find that the tDPD solution of this comprehensive 25-species coagulation model is only twice as computationally expensive as the conventional DPD simulation of the hydrodynamics alone, which is a significant advantage over available continuum solvers. PMID:26156459
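
    A sketch of the deterministic (Fickian) part of the concentration exchange between particles (the weight function, the omitted random flux, and the mapping from kappa to the effective diffusivity are illustrative, not the paper's exact forms):

    ```python
    import numpy as np

    def fickian_flux_update(C, pairs, r, kappa, dt, rc=1.0):
        """Concentration update from pairwise Fickian fluxes.

        Each particle i carries a concentration C[i]; every neighbor
        pair (i, j) at distance r exchanges a flux proportional to the
        concentration difference through a weight decaying to zero at
        the cutoff rc. Antisymmetry conserves total concentration.
        """
        dC = np.zeros_like(C)
        for (i, j), rij in zip(pairs, r):
            w = (1.0 - rij / rc) ** 2 if rij < rc else 0.0
            q = kappa * w * (C[j] - C[i])   # flux received by i from j
            dC[i] += q
            dC[j] -= q
        return C + dt * dC
    ```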

  4. Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    PubMed Central

    Stover, Lori J.; Nair, Niketh S.; Faeder, James R.

    2014-01-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility. PMID:24699269

  5. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
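
    To make the particle/population split concrete, here is a hedged toy sketch (not BioNetGen/NFsim code): an abundant ligand species is tracked as a single population count, while each receptor is an explicit particle with a binding state. The rule, the rate constant k_on, and all numbers are invented for illustration:

```python
import math
import random

class Receptor:
    """Explicit particle with one binding site (illustrative structured object)."""
    def __init__(self):
        self.bound = False

def hybrid_ssa(n_ligand, n_receptor, k_on, t_end, seed=0):
    """Gillespie-style simulation of L + R(free) -> R(bound), with L as a population."""
    rng = random.Random(seed)
    receptors = [Receptor() for _ in range(n_receptor)]
    t = 0.0
    while True:
        free = [r for r in receptors if not r.bound]
        a = k_on * n_ligand * len(free)          # propensity of the binding rule
        if a == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a   # exponential waiting time
        if t >= t_end:
            break
        rng.choice(free).bound = True            # network-free: update one particle
        n_ligand -= 1                            # population update for the lumped species
    return n_ligand, sum(r.bound for r in receptors)

# 10000 ligands vs. 50 receptors: the abundant pool suits population treatment.
print(hybrid_ssa(n_ligand=10000, n_receptor=50, k_on=1e-5, t_end=10.0))
```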

  6. Computer simulation of plasma and N-body problems

    NASA Technical Reports Server (NTRS)

    Harries, W. L.; Miller, J. B.

    1975-01-01

    The following FORTRAN language computer codes are presented: (1) efficient two- and three-dimensional central force potential solvers; (2) a three-dimensional simulator of an isolated galaxy which incorporates the potential solver; (3) a two-dimensional particle-in-cell simulator of the Jeans instability in an infinite self-gravitating compressible gas; and (4) a two-dimensional particle-in-cell simulator of a rotating self-gravitating compressible gaseous system of which rectangular coordinate and superior polar coordinate versions were written.

  7. Stochastic Set-Based Particle Swarm Optimization Based on Local Exploration for Solving the Carpool Service Problem.

    PubMed

    Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia

    2016-08-01

    The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terms to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO (S-PSO and binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.

  8. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

  9. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.
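
    The abstract does not spell out APT's pushers, so as a hedged illustration of the kind of structure-preserving particle pusher such geometric codes build on, here is the standard Boris scheme for a charged particle in given E and B fields (all parameters illustrative):

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half electric kick.
    The rotation preserves |v| in a pure magnetic field, giving good long-term behavior."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new

# Demo: gyration in a uniform magnetic field along z
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=1.0, dt=0.05)
print("speed after 1000 steps (should stay ~1):", np.linalg.norm(v))
```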

  10. A stochastic vortex structure method for interacting particles in turbulent shear flows

    NASA Astrophysics Data System (ADS)

    Dizaji, Farzad F.; Marshall, Jeffrey S.; Grant, John R.

    2018-01-01

    In a recent study, we have proposed a new synthetic turbulence method based on stochastic vortex structures (SVSs), and we have demonstrated that this method can accurately predict particle transport, collision, and agglomeration in homogeneous, isotropic turbulence in comparison to direct numerical simulation results. The current paper extends the SVS method to non-homogeneous, anisotropic turbulence. The key element of this extension is a new inversion procedure, by which the vortex initial orientation can be set so as to generate a prescribed Reynolds stress field. After validating this inversion procedure for simple problems, we apply the SVS method to the problem of interacting particle transport by a turbulent planar jet. Measures of the turbulent flow and of particle dispersion, clustering, and collision obtained by the new SVS simulations are shown to compare well with direct numerical simulation results. The influence of different numerical parameters, such as number of vortices and vortex lifetime, on the accuracy of the SVS predictions is also examined.
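
    As a hedged sketch of the synthetic-turbulence idea, one can superpose randomly placed and oriented vortex structures and evaluate the induced velocity at any point; the Gaussian-core swirl profile and every parameter below are illustrative assumptions, not the authors' SVS construction or their Reynolds-stress inversion procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_vortices(n, box, gamma, sigma):
    """Random vortex positions and orientations; the orientation sets the swirl axis."""
    pos = rng.uniform(0.0, box, size=(n, 3))
    axes = rng.normal(size=(n, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    return pos, axes, gamma, sigma

def induced_velocity(x, pos, axes, gamma, sigma):
    """Velocity at point x from all vortices, using a Gaussian-core swirl profile."""
    r = x - pos                               # (n, 3) separations from each vortex
    swirl = np.cross(axes, r)                 # rotation direction about each axis
    envelope = np.exp(-np.sum(r * r, axis=1) / (2.0 * sigma**2))
    return gamma * np.sum(swirl * envelope[:, None], axis=0)

pos, axes, gamma, sigma = make_vortices(n=200, box=10.0, gamma=0.1, sigma=0.5)
print(induced_velocity(np.array([5.0, 5.0, 5.0]), pos, axes, gamma, sigma))
```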

  11. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. In parallel Monte Carlo, both domain-replicated and domain-decomposed simulations will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
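
    A hedged sketch of the integer-tally idea: summing in a fixed-point representation makes the accumulation order-independent, at the cost of rounding each addend. The scale factor below is an illustrative choice, not the paper's:

```python
import random

SCALE = 2**40  # fixed-point resolution (illustrative); coarser scales lose more accuracy

def fixed_point_sum(values):
    """Order-independent sum: round each addend to an integer tally, sum exactly,
    convert back. Python integers never overflow, so the integer sum is exact."""
    return sum(round(v * SCALE) for v in values) / SCALE

vals = [random.random() * 1e-3 for _ in range(100000)]
shuffled = vals[:]
random.shuffle(shuffled)

# Plain float sums can differ between orderings; the fixed-point tallies never do.
print(sum(vals) == sum(shuffled))                          # often False
print(fixed_point_sum(vals) == fixed_point_sum(shuffled))  # always True
```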

  12. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-02-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  13. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-05-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  14. Particle Acceleration in a Statistically Modeled Solar Active-Region Corona

    NASA Astrophysics Data System (ADS)

    Toutounzi, A.; Vlahos, L.; Isliker, H.; Dimitropoulou, M.; Anastasiadis, A.; Georgoulis, M.

    2013-09-01

    Elaborating a statistical approach to describe the spatiotemporally intermittent electric field structures formed inside a flaring solar active region, we investigate the efficiency of such structures in accelerating charged particles (electrons). The large-scale magnetic configuration in the solar atmosphere responds to the strong turbulent flows that convey perturbations across the active region by initiating avalanche-type processes. The resulting unstable structures correspond to small-scale dissipation regions hosting strong electric fields. Previous research on particle acceleration in strongly turbulent plasmas provides a general framework for addressing such a problem. This framework combines various electromagnetic field configurations obtained by magnetohydrodynamical (MHD) or cellular automata (CA) simulations, or by employing a statistical description of the field's strength and configuration, with test-particle simulations. Our objective is to complement previous work done on the subject. As in previous efforts, a set of three probability distribution functions describes our ad hoc electromagnetic field configurations. In addition, we work on data-driven 3D magnetic field extrapolations. A collisional relativistic test-particle simulation traces each particle's guiding center within these configurations. We also find that an interplay between different electron populations (thermal/non-thermal, ambient/injected) in our simulations may address, via a re-acceleration mechanism, the so-called 'number problem'. Using the simulated particle-energy distributions at different heights of the cylinder, we test our results against observations in the framework of the collisional thick target model (CTTM) of solar hard X-ray (HXR) emission. This work is supported by the Hellenic National Space Weather Research Network (HNSWRN) via the THALIS Programme.

  15. Development of a fully implicit particle-in-cell scheme for gyrokinetic electromagnetic turbulence simulation in XGC1

    NASA Astrophysics Data System (ADS)

    Ku, Seung-Hoe; Hager, R.; Chang, C. S.; Chacon, L.; Chen, G.; EPSI Team

    2016-10-01

    The cancellation problem has been a long-standing issue for long-wavelength modes in electromagnetic gyrokinetic PIC simulations in toroidal geometry. In an attempt to resolve this issue, we implemented a fully implicit time integration scheme in the full-f gyrokinetic PIC code XGC1. The new scheme, based on the implicit Vlasov-Darwin PIC algorithm by G. Chen and L. Chacon, can potentially resolve the cancellation problem. The time advance for the field and the particle equations is space-time-centered, with particle sub-cycling. The resulting system of equations is solved by a Picard iteration solver with a fixed-point accelerator. The algorithm is implemented in the parallel-velocity formalism instead of the canonical parallel-momentum formalism. XGC1 specializes in simulating the tokamak edge plasma with magnetic separatrix geometry. A fully implicit scheme could be a path to accurate and efficient gyrokinetic simulations. We will test whether this numerical scheme overcomes the cancellation problem and reproduces the dispersion relation of Alfven waves and tearing modes in cylindrical geometry. Funded by US DOE FES and ASCR, and computing resources provided by OLCF through ALCC.
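
    As a generic illustration of a Picard solve for an implicit time step (not the XGC1 scheme), the sketch below iterates x = x_n + dt*f(t, x) to a fixed point; the simple relaxation factor stands in for a real fixed-point accelerator, and all parameters are invented:

```python
import numpy as np

def picard_step(f, x_n, t, dt, tol=1e-12, max_iter=200, relax=1.0):
    """Solve x = x_n + dt*f(t, x) (backward Euler) by Picard iteration.
    relax < 1 damps the update, a crude stand-in for a fixed-point accelerator."""
    x = x_n.copy()
    for _ in range(max_iter):
        x_new = x_n + dt * f(t, x)
        if np.linalg.norm(x_new - x) < tol * max(1.0, np.linalg.norm(x_new)):
            return x_new
        x = (1.0 - relax) * x + relax * x_new
    return x

# Demo: backward Euler steps for a harmonic oscillator dx/dt = A x;
# the iteration contracts because dt * ||A|| < 1.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda t, x: A @ x
x = np.array([1.0, 0.0])
for _ in range(100):
    x = picard_step(f, x, t=0.0, dt=0.05)
print("state after 100 implicit steps (backward Euler damps the amplitude):", x)
```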

  16. Developing a new controllable lunar dust simulant: BHLD20

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Yi, Min; Shen, Zhigang; Zhang, Xiaojing; Ma, Shulin

    2017-07-01

    Identifying and eliminating the negative effects of lunar dust are of great importance for future lunar exploration. Since the available lunar samples are limited, developing terrestrial lunar dust simulants becomes critical for the study of the lunar dust problem. In this work, beyond the three existing lunar dust simulants (JSC-1Avf, NU-LHT-1D, and CLDS-i), we developed a new high-fidelity lunar dust simulant named BHLD20. We also established a methodology whereby soil and dust simulants can be produced through variations in portions of the overall procedure, such that the properties of the products can be controlled by adjusting the feedstock preparation and heating process. The key ingredients of our innovative preparation route include: (1) plagioclase, used as a major material in preparing all kinds of lunar dust simulants; (2) a muffle furnace, applied to conveniently enrich the glass phase in the feedstock, with the production of some composite particles; (3) a one-step sand-milling technique, employed for mass pulverization without wasting feedstock; and (4) a particle dispersant, utilized to prevent agglomeration in the lunar dust simulant and retain the real particle size. Research activities in the development of BHLD20 can help solve the lunar dust problem.

  17. An Immersed Boundary-Lattice Boltzmann Method for Simulating Particulate Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Baili; Cheng, Ming; Lou, Jing

    2013-11-01

    A two-dimensional momentum exchange-based immersed boundary-lattice Boltzmann method developed by X.D. Niu et al. (2006) has been extended to three dimensions for solving fluid-particle interaction problems. This method combines the most desirable features of the lattice Boltzmann method and the immersed boundary method by using a regular Eulerian mesh for the flow domain and a Lagrangian mesh for the moving particles in the flow field. The no-slip boundary conditions for the fluid and the particles are enforced by adding a force density term into the lattice Boltzmann equation, and the forcing term is simply calculated by the momentum exchange of the boundary particle density distribution functions, which are interpolated by Lagrangian polynomials from the underlying Eulerian mesh. This method preserves the advantages of the lattice Boltzmann method in tracking a group of particles and, at the same time, provides an alternative approach to treating solid-fluid boundary conditions. Numerical validations show that the present method is very accurate and efficient. The present method will be further developed to simulate more complex problems with particle deformation, particle-bubble and particle-droplet interactions.
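
    A hedged sketch of the interpolation/spreading machinery at the heart of immersed boundary methods; a simple hat-function kernel is used here in place of the paper's Lagrangian polynomials, and the grid, field, and marker values are invented:

```python
import numpy as np

# Minimal immersed-boundary interpolation/spreading demo on a 2D grid (sketch).

def hat(r):
    """1D hat kernel with support |r| < 1 (in grid units); sums to 1 over integers."""
    return np.maximum(0.0, 1.0 - np.abs(r))

def interp(field, xp, yp, h):
    """Interpolate a grid field to a Lagrangian marker at (xp, yp)."""
    i, j = np.arange(field.shape[0]), np.arange(field.shape[1])
    return hat(i - xp / h) @ field @ hat(j - yp / h)

def spread(force_grid, fx, xp, yp, h):
    """Spread a Lagrangian force back onto the grid (the transpose of interp)."""
    i, j = np.arange(force_grid.shape[0]), np.arange(force_grid.shape[1])
    force_grid += fx * np.outer(hat(i - xp / h), hat(j - yp / h)) / h**2

h = 1.0
u = np.fromfunction(lambda i, j: 0.1 * i, (32, 32))  # simple shear-like field
print("velocity at marker:", interp(u, 10.3, 7.8, h))
f = np.zeros((32, 32))
spread(f, fx=1.0, xp=10.3, yp=7.8, h=h)
print("total spread force (should equal fx = 1):", f.sum() * h**2)
```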

  18. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase-space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO.
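
    A hedged, minimal sketch of the Jacobian-Free Newton-Krylov idea on a small algebraic system (not the plasma solver itself): the Krylov method only needs Jacobian-vector products, which are approximated by finite differences of the residual, so the Jacobian matrix is never formed:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, x0, tol=1e-10, max_newton=20, eps=1e-7):
    """Newton's method with GMRES linear solves and matrix-free Jacobian action."""
    x = x0.copy()
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        def Jv(v):
            # directional finite difference: J(x) v ~ (F(x + eps*v) - F(x)) / eps
            return (F(x + eps * v) - Fx) / eps
        J = LinearOperator((x.size, x.size), matvec=Jv)
        dx, _ = gmres(J, -Fx)
        x = x + dx
    return x

# Demo: solve the nonlinear system x0^2 + x1 = 3, x0 + x1^2 = 5 (root near (1, 2))
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(jfnk_solve(F, np.array([1.0, 1.0])))
```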

  19. Interactive Particle Visualization

    NASA Astrophysics Data System (ADS)

    Gribble, Christiaan P.

    Particle-based simulation methods are used to model a wide range of complex phenomena and to solve time-dependent problems of various scales. Effective visualizations of the resulting state will communicate subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves. This chapter discusses two approaches to interactive particle visualization that satisfy these goals: one targeting desktop systems equipped with programmable graphics hardware, and the other targeting moderately sized multicore systems using packet-based ray tracing.

  20. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with the establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of the damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to avoid spurious damage growth in material failure and cutting-debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  1. Fictitious domain method for fully resolved reacting gas-solid flow simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Longhui; Liu, Kai; You, Changfu

    2015-10-01

    Fully resolved simulation (FRS) for gas-solid multiphase flow considers solid objects as finite-sized regions in flow fields, and their behaviours are predicted by solving equations in both fluid and solid regions directly. Fixed-mesh numerical methods, such as the fictitious domain method, are preferred for solving FRS problems and have been widely researched. However, for reacting gas-solid flows no suitable fictitious domain numerical method has been developed. This work presents a new fictitious domain finite element method for FRS of reacting particulate flows. Low Mach number reacting flow governing equations are solved sequentially on a regular background mesh. Particles are immersed in the mesh and driven by their surface forces and torques integrated on immersed interfaces. Additional treatments of energy and surface reactions are developed. Several numerical test cases validated the method, and a simulation of a falling array of burning carbon particles demonstrated its capability for solving problems with moving, reacting particle clusters.

  2. On Characterizing Particle Shape

    NASA Technical Reports Server (NTRS)

    Ennis, Bryan J.; Rickman, Douglas; Rollins, A. Brent; Ennis, Brandon

    2014-01-01

    It is well known that particle shape affects flow characteristics of granular materials, as well as a variety of other solids processing issues such as compaction, rheology, filtration and other two-phase flow problems. The impact of shape crosses many diverse and commercially important applications, including pharmaceuticals, civil engineering, metallurgy, health, and food processing. Two applications studied here include the dry solids flow of lunar simulants (e.g. JSC-1, NU-LHT-2M, OB-1), and the flow properties of wet concrete, including final compressive strength. A multi-dimensional, generalized engineering method to quantitatively characterize particle shapes has been developed, applicable to both single-particle orientation and multi-particle assemblies. The two-dimensional to three-dimensional inversion problem is also treated, and the application of these methods to DEM model particles is discussed. In the case of lunar simulants, flow properties of six lunar simulants have been measured, and the impact of particle shape on flowability, as characterized by the shape method developed here, is discussed, especially in the context of three simulants of similar size range. In the context of concrete processing, concrete construction is a major contributor to greenhouse gas production, of which the major contributor is the cement binder loading. Any optimization of concrete rheology and packing that reduces cement loading and improves strength can also reduce currently required construction safety factors. The characterization approach here is also demonstrated for the impact of rock aggregate shape on concrete slump rheology and dry compressive strength.

  3. Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2000-01-01

    An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three dimensional computer code. Simulations of three dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.

  4. Kinematic Model of Transient Shape-Induced Anisotropy in Dense Granular Flow

    NASA Astrophysics Data System (ADS)

    Nadler, B.; Guillard, F.; Einav, I.

    2018-05-01

    Nonspherical particles are ubiquitous in nature and industry, yet previous theoretical models of granular media are mostly limited to systems of spherical particles. The problem is that in systems of nonspherical anisotropic particles, dynamic particle alignment critically affects their mechanical response. To study the tendency of such particles to align, we propose a simple kinematic model that relates the flow to the evolution of particle alignment with respect to each other. The validity of the proposed model is supported by comparison with particle-based simulations for various particle shapes ranging from elongated rice-like (prolate) to flattened lentil-like (oblate) particles. The model shows good agreement with the simulations for both steady-state and transient responses, and advances the development of comprehensive constitutive models for shape-anisotropic particles.

  5. Global linear gyrokinetic particle-in-cell simulations including electromagnetic effects in shaped plasmas

    NASA Astrophysics Data System (ADS)

    Mishchenko, A.; Borchardt, M.; Cole, M.; Hatzky, R.; Fehér, T.; Kleiber, R.; Könies, A.; Zocco, A.

    2015-05-01

    We give an overview of recent developments in electromagnetic simulations based on the gyrokinetic particle-in-cell codes GYGLES and EUTERPE. We present the gyrokinetic electromagnetic models implemented in the codes and discuss further improvements of the numerical algorithm, in particular the so-called pullback mitigation of the cancellation problem. The improved algorithm is employed to simulate linear electromagnetic instabilities in shaped tokamak and stellarator plasmas, which was previously impossible for the parameters considered.

  6. Lagrangian particles with mixing. I. Simulating scalar transport

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2009-06-01

    The physical similarity and mathematical equivalence of continuous diffusion and particle random walk forms one of the cornerstones of modern physics and the theory of stochastic processes. The randomly walking particles do not need to possess any properties other than location in physical space. However, particles used in many models dealing with simulating turbulent transport and turbulent combustion do possess a set of scalar properties, and mixing between particle properties is performed to reflect the dissipative nature of the diffusion processes. We show that continuous scalar transport and diffusion can be accurately specified by means of localized mixing between randomly walking Lagrangian particles with scalar properties, and we assess the errors associated with this scheme. Particles with scalar properties and localized mixing represent an alternative formulation for the process that is selected to represent the continuous diffusion. Simulating diffusion by Lagrangian particles with mixing involves three main competing requirements: minimizing stochastic uncertainty, minimizing bias introduced by numerical diffusion, and preserving independence of particles. These requirements are analyzed for two limiting cases: mixing between two particles and mixing between a large number of particles. The problem of possible dependences between particles is the most complicated. This problem is analyzed using a coupled chain of equations that has similarities with the Bogoliubov-Born-Green-Kirkwood-Yvon chain in statistical physics. Dependences between particles can be significant in close proximity of the particles, resulting in a reduced rate of mixing. This work develops further the ideas introduced in a previously published letter [Phys. Fluids 19, 031702 (2007)]. Paper I of this work is followed by Paper II [Phys. Fluids 21, 065102 (2009)], where modeling of turbulent reacting flows by Lagrangian particles with localized mixing is specifically considered.
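
    A hedged sketch of the scheme the paper analyzes, under invented parameters: particles random-walk with diffusivity D, and randomly paired particles that come within r_mix relax their scalar values toward the pair mean, which conserves the total scalar exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, phi, D, dt, r_mix, mu):
    """One random-walk step plus localized pairwise mixing (illustrative parameters)."""
    x = x + np.sqrt(2.0 * D * dt) * rng.normal(size=x.size)
    order = rng.permutation(x.size)
    for i, j in zip(order[::2], order[1::2]):    # random candidate pairs
        if abs(x[i] - x[j]) < r_mix:             # mix only nearby particles
            mean = 0.5 * (phi[i] + phi[j])
            phi[i] += mu * (mean - phi[i])       # relax both members toward the mean,
            phi[j] += mu * (mean - phi[j])       # which leaves phi[i] + phi[j] unchanged
    return x, phi

n = 2000
x = rng.uniform(-1.0, 1.0, n)
phi = np.where(x < 0.0, 1.0, 0.0)                # initial scalar step profile
for _ in range(200):
    x, phi = step(x, phi, D=1e-3, dt=0.01, r_mix=0.05, mu=0.5)
print("mean scalar (conserved):", phi.mean())
```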

  7. Numerical investigation of the dynamics of Janus magnetic particles in a rotating magnetic field

    NASA Astrophysics Data System (ADS)

    Kim, Hui Eun; Kim, Kyoungbeom; Ma, Tae Yeong; Kang, Tae Gon

    2017-02-01

    We investigated the rotational dynamics of Janus magnetic particles suspended in a viscous liquid, in the presence of an externally applied rotating magnetic field. A previously developed two-dimensional direct simulation method, based on the finite element method and a fictitious domain method, is employed to solve the magnetic particulate flow. As for the magnetic problem, the two Maxwell equations are converted to a differential equation using the magnetic potential. The magnetic forces acting on the particles are treated by a Maxwell stress tensor formulation, enabling us to consider the magnetic interactions among the particles without any approximation. The dynamics of a single particle in the rotating field is studied to elucidate the effect of the Mason number and the magnetic susceptibility on the particle motions. Then, we extended our interest to a two-particle problem, focusing on the effect of the initial configuration of the particles on the particle motions. In three-particle interaction problems, the particle dynamics and the fluid flow induced by the particle motions are significantly affected by the particle configuration and the orientation of each particle.

  8. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. A large time step interval can then be adopted: for a particle size of 1 μm, it is about a million times longer than the conventional time step interval of the DSMC method, so the computation time becomes about one-millionth. We simulate the motion of a graphite particle subject to thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere of size 1 μm, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.
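
    As a rough, hedged sketch of the weight-factor idea (gas and particle numbers invented, geometry reduced to 1D head-on elastic collisions): each simulated event stands for `weight` real molecule-particle collisions, so the momentum kick is multiplied by the weight and the time step grows by the same factor:

```python
import numpy as np

rng = np.random.default_rng(2)

def relax_particle(v_p, n_events, nu_coll, m_mol, m_p, T, weight):
    """Drag-relax a heavy particle with weighted molecular kicks (units kB = 1)."""
    dt = weight / nu_coll            # enlarged time step per weighted event
    v, t = v_p, 0.0
    for _ in range(n_events):
        v_mol = rng.normal(0.0, np.sqrt(T / m_mol))          # thermal molecule velocity
        dv_one = 2.0 * m_mol / (m_p + m_mol) * (v_mol - v)   # one elastic head-on collision
        v += weight * dv_one         # `weight` collisions lumped into one event
        t += dt
    return v, t

v, t = relax_particle(v_p=10.0, n_events=2000, nu_coll=1e6,
                      m_mol=1.0, m_p=1e4, T=1.0, weight=100)
print(f"particle velocity {v:.3f} after {t:.3f} s of simulated drag")
```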

  9. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where the stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate the stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.

  10. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  11. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  12. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  13. Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.

    PubMed

    van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim

    2018-05-21

    Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.

  14. Extending self-organizing particle systems to problem solving.

    PubMed

    Rodríguez, Alejandro; Reggia, James A

    2004-01-01

    Self-organizing particle systems consist of numerous autonomous, purely reflexive agents ("particles") whose collective movements through space are determined primarily by local influences they exert upon one another. Inspired by biological phenomena (bird flocking, fish schooling, etc.), particle systems have been used not only for biological modeling, but also increasingly for applications requiring the simulation of collective movements such as computer-generated animation. In this research, we take some first steps in extending particle systems so that they not only move collectively, but also solve simple problems. This is done by giving the individual particles (agents) a rudimentary intelligence in the form of a very limited memory and a top-down, goal-directed control mechanism that, triggered by appropriate conditions, switches them between different behavioral states and thus different movement dynamics. Such enhanced particle systems are shown to be able to function effectively in performing simulated search-and-collect tasks. Further, computational experiments show that collectively moving agent teams are more effective than similar but independently moving ones in carrying out such tasks, and that agent teams of either type that split off members of the collective to protect previously acquired resources are most effective. This work shows that the reflexive agents of contemporary particle systems can readily be extended to support goal-directed problem solving while retaining their collective movement behaviors. These results may prove useful not only for future modeling of animal behavior, but also in computer animation, coordinated movement control in robotic teams, particle swarm optimization, and computer games.
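
    The following hedged toy (invented behaviors, not the paper's model) shows the basic pattern described above: reflexive cohesion-plus-noise movement in a "search" state, and a goal-directed switch to a "carry" state that overrides flocking until the agent reaches home:

```python
import random

class Agent:
    """Reflexive particle with a minimal goal-directed state machine."""
    def __init__(self):
        self.pos = [random.uniform(0, 20), random.uniform(0, 20)]
        self.state = "search"

    def step(self, agents, resources, home=(0.0, 0.0)):
        if self.state == "search":
            # cohesion: drift toward the swarm centroid, plus random exploration
            cx = sum(a.pos[0] for a in agents) / len(agents)
            cy = sum(a.pos[1] for a in agents) / len(agents)
            self.pos[0] += 0.05 * (cx - self.pos[0]) + random.uniform(-0.5, 0.5)
            self.pos[1] += 0.05 * (cy - self.pos[1]) + random.uniform(-0.5, 0.5)
            for r in resources:
                if abs(self.pos[0] - r[0]) + abs(self.pos[1] - r[1]) < 1.0:
                    resources.remove(r)
                    self.state = "carry"        # goal-directed state switch
                    break
        else:  # "carry": ignore the flock, move straight home
            self.pos[0] += 0.2 * (home[0] - self.pos[0])
            self.pos[1] += 0.2 * (home[1] - self.pos[1])
            if abs(self.pos[0]) + abs(self.pos[1]) < 0.5:
                self.state = "search"           # drop off and resume searching

agents = [Agent() for _ in range(30)]
resources = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(10)]
for _ in range(500):
    for a in agents:
        a.step(agents, resources)
print("resources remaining:", len(resources))
```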

  15. Evaluation of new collision-pair selection models in DSMC

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Hassan; Roohi, Ehsan

    2017-10-01

    The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
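
    A hedged sketch of one baseline scheme discussed above, deterministic nearest-neighbor partner selection within a cell; the positions are invented, and the acceptance-rejection test on relative speed that a full DSMC collision routine applies is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def select_pair_nearest(positions):
    """Pick the first collision partner at random; its partner is the closest
    other particle in the same cell (deterministic nearest-neighbor selection)."""
    i = rng.integers(len(positions))
    d = np.linalg.norm(positions - positions[i], axis=1)
    d[i] = np.inf                       # exclude self
    return i, int(np.argmin(d))

cell = rng.uniform(0.0, 1.0, size=(20, 3))   # particle positions in one cell
i, j = select_pair_nearest(cell)
print(f"collide particles {i} and {j}, separation {np.linalg.norm(cell[i] - cell[j]):.3f}")
```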

  16. Understanding and simulating the material behavior during multi-particle irradiations

    PubMed Central

    Mir, Anamul H.; Toulemonde, M.; Jegou, C.; Miro, S.; Serruys, Y.; Bouffard, S.; Peuget, S.

    2016-01-01

    A number of studies have suggested that the irradiation behavior and damage processes occurring during sequential and simultaneous particle irradiations can significantly differ. Currently, there is no definite answer as to why and when such differences are seen. Additionally, the conventional multi-particle irradiation facilities cannot correctly reproduce the complex irradiation scenarios experienced in a number of environments like space and nuclear reactors. Therefore, a better understanding of multi-particle irradiation problems and possible alternatives are needed. This study shows ionization induced thermal spike and defect recovery during sequential and simultaneous ion irradiation of amorphous silica. The simultaneous irradiation scenario is shown to be equivalent to multiple small sequential irradiation scenarios containing latent damage formation and recovery mechanisms. The results highlight the absence of any new damage mechanism and time-space correlation between various damage events during simultaneous irradiation of amorphous silica. This offers a new and convenient way to simulate and understand complex multi-particle irradiation problems. PMID:27466040

  17. 3D Multispecies Nonlinear Perturbative Particle Simulation of Intense Nonneutral Particle Beams (Research supported by the Department of Energy and the Short Pulse Spallation Source Project and LANSCE Division of LANL.)

    NASA Astrophysics Data System (ADS)

    Qin, Hong; Davidson, Ronald C.; Lee, W. Wei-Li

    1999-11-01

    The Beam Equilibrium Stability and Transport (BEST) code, a 3D multispecies nonlinear perturbative particle simulation code, has been developed to study collective effects in intense charged particle beams described self-consistently by the Vlasov-Maxwell equations. A Darwin model is adopted for transverse electromagnetic effects. As a 3D multispecies perturbative particle simulation code, it provides several unique capabilities. Since the simulation particles are used to simulate only the perturbed distribution function and self-fields, the simulation noise is reduced significantly. The perturbative approach also enables the code to investigate different physics effects separately, as well as simultaneously. The code can be easily switched between linear and nonlinear operation, and used to study both linear stability properties and nonlinear beam dynamics. These features, combined with 3D and multispecies capabilities, provide an effective tool to investigate the electron-ion two-stream instability, periodically focused solutions in alternating focusing fields, and many other important problems in nonlinear beam dynamics and accelerator physics. Applications to the two-stream instability are presented.

  18. Direct Numerical Simulation of Fluid Flow and Mass Transfer in Particle Clusters

    PubMed Central

    2018-01-01

    In this paper, an efficient ghost-cell based immersed boundary method is applied to perform direct numerical simulation (DNS) of mass transfer problems in particle clusters. To be specific, a nine-sphere cuboid cluster and a randomly generated spherical cluster consisting of 100 spheres are studied. In both cases, the cluster is composed of active catalysts and inert particles, and the mutual influence of particles on their mass transfer performance is studied. To simulate active catalysts, the Dirichlet boundary condition is imposed at the external surface of the spheres, while the zero-flux Neumann boundary condition is applied for inert particles. Through our studies, clustering is found to have a negative influence on the mass transfer performance, which can then be improved by dilution with inert particles and by higher Reynolds numbers. The distribution of active/inert particles may lead to large variations of the cluster mass transfer performance, and individual particles deep inside the cluster may possess a high Sherwood number. PMID:29657359

  19. Effects of Initial Particle Distribution on an Energetic Dispersal of Particles

    NASA Astrophysics Data System (ADS)

    Rollin, Bertrand; Ouellet, Frederick; Koneru, Rahul; Garno, Joshua; Durant, Bradford

    2017-11-01

    Accurate prediction of the late-time solid particle cloud distribution following an explosive dispersal of particles is an extremely challenging problem for compressible multiphase flow simulations. The source of this difficulty is twofold: (i) the complex sequence of events taking place. Indeed, as the blast wave crosses the surrounding layer of particles, compaction occurs shortly before particles disperse radially at high speed. Then, during the dispersion phase, complex multiphase interactions occur between particles and detonation products. (ii) Precise characterization of the explosive and particle distribution is virtually impossible. In this numerical experiment, we focus on the sensitivity of late-time particle cloud distributions to carefully designed initial distributions, assuming the explosive is well described. Using point-particle simulations, we study the case of a bed of glass particles surrounding an explosive. Constraining our simulations to relatively low initial volume fractions to avoid reaching the close-packing limit, we seek to describe qualitatively and quantitatively the late-time dependence of a solid particle cloud on its distribution before the energy release of an explosive. This work was supported by the U.S. DoE, NNSA, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  20. Multigrid contact detection method

    NASA Astrophysics Data System (ADS)

    He, Kejing; Dong, Shoubin; Zhou, Zhaoyao

    2007-03-01

    Contact detection is a general problem in many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD), integrating the multigrid idea with contact detection. Both the time complexity and the memory consumption of the MGCD are O(N). Unlike other methods, whose efficiency is influenced strongly by the object size distribution, the performance of the MGCD is insensitive to it. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method in memory consumption. For objects of diverse sizes, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtain the packing density of monosize packing and of binary packing with a size ratio of 10. The packing density for monosize particles is 0.636. For binary packing with a size ratio of 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.
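
    The sketch below shows single-level cell binning, the building block that grid-hierarchy methods such as MGCD generalize so that objects of very different sizes can each be registered on an appropriately sized grid. It is an assumed illustration of the idea, not the authors' implementation.

    ```python
    from collections import defaultdict
    from itertools import product
    import numpy as np

    def contacts(centers: np.ndarray, radius: float):
        cell = 2.0 * radius                  # one grid level sized to the spheres
        grid = defaultdict(list)
        for i, p in enumerate(centers):
            grid[tuple((p // cell).astype(int))].append(i)
        pairs = set()
        for key, members in grid.items():
            for d in product((-1, 0, 1), repeat=3):      # 27-cell neighborhood
                for j in grid.get(tuple(k + o for k, o in zip(key, d)), []):
                    for i in members:
                        if i < j and np.linalg.norm(centers[i] - centers[j]) < 2 * radius:
                            pairs.add((i, j))
        return sorted(pairs)

    pts = np.random.default_rng(1).uniform(0.0, 10.0, (200, 3))
    print(len(contacts(pts, 0.3)), "contacting pairs")
    ```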

  1. LANL LDRD-funded project: Test particle simulations of energetic ions in natural and artificial radiation belts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowee, Misa; Liu, Kaijun; Friedel, Reinhard H.

    2012-07-17

    We summarize the scientific problem and work plan for the LANL LDRD-funded project to use a test particle code to study the sudden de-trapping of inner belt protons and possible cross-L transport of debris ions after a high altitude nuclear explosion (HANE). We also discuss future application of the code for other HANE-related problems.

  2. Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors

    NASA Astrophysics Data System (ADS)

    Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.

    2016-05-01

    This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for the Xeon Phi architecture and present our experience in the porting and optimization of the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization, leading to an overall 4.2× speedup on CPU and 7.5× on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6×.
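
    A structure-of-arrays sketch of the data layout such optimizations favor: positions and velocities live in contiguous arrays, so the push reduces to vectorizable sweeps over memory. The field gather is a stand-in function and the parameters are illustrative; nothing here is PICADOR code.

    ```python
    import numpy as np

    n, dt, q_over_m = 100_000, 1e-2, -1.0
    x = np.zeros(n)                        # structure of arrays: one contiguous
    v = np.ones(n)                         # array per particle attribute

    def gather_E(x):                       # stand-in for the grid-to-particle gather
        return 0.1 * np.sin(x)

    for _ in range(50):                    # each update is a unit-stride sweep
        v += q_over_m * gather_E(x) * dt
        x += v * dt

    print(x[:3])
    ```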

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    James K. Neathery; Gary Jacobs; Burtron H. Davis

    In this reporting period, a fundamental filtration study was started to investigate the separation of Fischer-Tropsch Synthesis (FTS) liquids from iron-based catalyst particles. Slurry-phase FTS in slurry bubble column reactor systems is the preferred mode of production since the reaction is highly exothermic. Consequently, heavy wax products must be separated from catalyst particles before being removed from the reactor system. Achieving an efficient wax product separation from iron-based catalysts is one of the most challenging technical problems associated with slurry-phase FTS. The separation problem is further compounded by catalyst particle attrition and the formation of ultra-fine iron carbide and/or carbon particles. Existing pilot-scale equipment was modified to include a filtration test apparatus. After undergoing an extensive plant shakedown period, filtration tests with cross-flow filter modules using simulant FTS wax slurry were conducted. The focus of these early tests was to find adequate mixtures of polyethylene wax to simulate FTS wax. Catalyst particle size analysis techniques were also developed. Initial analyses of the slurry and filter permeate particles will be used by the research team to design improved filter media and cleaning strategies.

  4. A Large-Particle Monte Carlo Code for Simulating Non-Linear High-Energy Processes Near Compact Objects

    NASA Technical Reports Server (NTRS)

    Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek

    1995-01-01

    High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.

  5. Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM

    NASA Astrophysics Data System (ADS)

    de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl

    2002-03-01

    We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent, numerical simulation, which includes dust particles, to study the potential role of large-scale, gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performance of our implementation of the parallel code on an Origin 2000 supercomputer is presented and discussed. It exhibits very good speedup behavior and low load imbalance. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.

  6. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.

  7. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to a balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve on the optimal solution obtained by PSO.
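
    A compact sketch of the PSOGA idea: a standard PSO velocity/position update with a GA-style mutation applied to the swarm each iteration. It is shown on a generic continuous objective rather than the transportation constraints of the paper, and all coefficients (w, c1, c2, mutation rate pm) are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda X: np.sum((X - 3.0) ** 2, axis=1)           # toy cost to minimize

    n, dim, w, c1, c2, pm = 30, 5, 0.7, 1.5, 1.5, 0.05
    X = rng.uniform(-10, 10, (n, dim)); V = np.zeros((n, dim))
    P, Pf = X.copy(), f(X)                                 # personal bests
    g = P[np.argmin(Pf)]                                   # global best

    for _ in range(200):
        r1, r2 = rng.random((2, n, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)  # PSO update
        X = X + V
        mask = rng.random((n, dim)) < pm                   # GA mutation operator
        X[mask] += rng.normal(0.0, 1.0, mask.sum())
        fx = f(X)
        better = fx < Pf
        P[better], Pf[better] = X[better], fx[better]
        g = P[np.argmin(Pf)]

    print("best cost found:", Pf.min())
    ```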

  8. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
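
    As a hint of the bookkeeping behind constant-number schemes, the sketch below merges two weighted particles into one while conserving total number (weight) and total mass, freeing a slot that can host a newly nucleated particle. The paper's low-weight merging is more elaborate, so this is only an assumed illustration.

    ```python
    def merge(w1: float, x1: float, w2: float, x2: float):
        """Merge two (weight, size) particles; total weight and total
        mass w*x are both conserved exactly."""
        w = w1 + w2
        x = (w1 * x1 + w2 * x2) / w
        return w, x

    # merging the two lowest-weight particles keeps the particle count constant
    print(merge(1.0, 2.0e-9, 0.1, 5.0e-9))
    ```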

  9. Numerical simulation of a flow-like landslide using the particle finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Krabbenhoft, Kristian; Sheng, Daichao; Li, Weichao

    2015-01-01

    In this paper, an actual landslide process that occurred in Southern China is simulated by a continuum approach, the particle finite element method (PFEM). The PFEM attempts to solve the boundary-value problem in the framework of solid mechanics, satisfying the governing equations including momentum conservation, the displacement-strain relation, the constitutive relation, as well as the frictional contact between the sliding mass and the slip surface. To ensure the convergence of solutions, the problem is formulated as a mathematical programming problem, while the particle finite element procedure is employed to tackle the issues of mesh distortion and free-surface evolution. The whole procedure of the landslide, from initiation and sliding to deposition, is successfully reproduced by the continuum approach. It is shown that the density of the mass has little influence on the sliding process in this landslide, whereas both the geometry and the roughness of the slip surface play important roles. Comparative studies are also conducted, and a satisfactory agreement is obtained.

  10. An Investigation into Solution Verification for CFD-DEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fullmer, William D.; Musser, Jordan

    This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically the National Energy Technology Laboratory's (NETL) open-source MFiX code (MFiX-DEM) with a diffusion-based particle-to-continuum filtering scheme. In particular, this study focused on determining whether the numerical method has a solution in the high-resolution limit where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered as the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate "grid-free" solution in the limit of infinite resolution. The results show that the diffusion-based scheme does yield a converging solution. However, the convergence is more complicated than encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on top of globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty. By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be of the same order of magnitude as ensemble or time-averaging uncertainties. By testing different drag laws, almost all cases studied show that the model form uncertainty in this one, very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid of roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly set at a constant of six particle diameters. A few exploratory tests were performed to show that similar convergence behavior was observed for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail because determining an appropriate filter size is really a validation question which must be determined by comparison to experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
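
    The regression-based extrapolation mentioned above can be illustrated by fitting phi(h) = phi0 + a*h^p to results from four or more grid resolutions and reading off phi0 as the grid-free estimate; with more resolutions than the three unknowns, the fit is over-determined. The data values below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    h   = np.array([5.0, 2.5, 1.0, 0.5, 1.0 / 6.0])       # grid size / particle diameter
    phi = np.array([101.8, 100.9, 100.35, 100.2, 100.1])  # e.g. averaged pressure drop

    model = lambda h, phi0, a, p: phi0 + a * h**p
    (phi0, a, p), _ = curve_fit(model, h, phi, p0=(phi.min(), 1.0, 1.0))
    print(f"grid-free estimate: {phi0:.2f}, observed order: {p:.2f}")
    ```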

  11. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods aimed at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to bridge seemingly unrelated areas of research.

  12. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.

  13. 2D Implosion Simulations with a Kinetic Particle Code

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Even, Wesley; Strother, Terrance

    2017-10-01

    Many problems in laboratory and plasma physics are subject to flows that move between the continuum and the kinetic regime. We discuss two-dimensional (2D) implosion simulations that were performed using a Monte Carlo kinetic particle code. The application of kinetic transport theory is motivated, in part, by the occurrence of non-equilibrium effects in inertial confinement fusion (ICF) capsule implosions, which cannot be fully captured by hydrodynamics simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple 2D disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. I.S. acknowledges support through the Director's fellowship from Los Alamos National Laboratory. This research used resources provided by the LANL Institutional Computing Program.

  14. Developing a particle tracking surrogate model to improve inversion of ground water - Surface water models

    NASA Astrophysics Data System (ADS)

    Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain

    2018-03-01

    The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.

  15. Dark matter self-interactions and small scale structure

    NASA Astrophysics Data System (ADS)

    Tulin, Sean; Yu, Hai-Bo

    2018-02-01

    We review theories of dark matter (DM) beyond the collisionless paradigm, known as self-interacting dark matter (SIDM), and their observable implications for astrophysical structure in the Universe. Self-interactions are motivated, in part, due to the potential to explain long-standing (and more recent) small scale structure observations that are in tension with collisionless cold DM (CDM) predictions. Simple particle physics models for SIDM can provide a universal explanation for these observations across a wide range of mass scales spanning dwarf galaxies, low and high surface brightness spiral galaxies, and clusters of galaxies. At the same time, SIDM leaves intact the success of ΛCDM cosmology on large scales. This report covers the following topics: (1) small scale structure issues, including the core-cusp problem, the diversity problem for rotation curves, the missing satellites problem, and the too-big-to-fail problem, as well as recent progress in hydrodynamical simulations of galaxy formation; (2) N-body simulations for SIDM, including implications for density profiles, halo shapes, substructure, and the interplay between baryons and self-interactions; (3) semi-analytic Jeans-based methods that provide a complementary approach for connecting particle models with observations; (4) merging systems, such as cluster mergers (e.g., the Bullet Cluster) and minor infalls, along with recent simulation results for mergers; (5) particle physics models, including light mediator models and composite DM models; and (6) complementary probes for SIDM, including indirect and direct detection experiments, particle collider searches, and cosmological observations. We provide a summary and critical look for all current constraints on DM self-interactions and an outline for future directions.

  16. The scaling of relativistic double-layer widths - Poisson-Vlasov solutions and particle-in-cell simulations

    NASA Technical Reports Server (NTRS)

    Sulkanen, Martin E.; Borovsky, Joseph E.

    1992-01-01

    Relativistic plasma double layers are studied through the solution of the one-dimensional, unmagnetized, steady-state Poisson-Vlasov equations and by means of one-dimensional, unmagnetized, particle-in-cell simulations. The thickness versus potential-drop scaling law is extended to relativistic potential drops and relativistic plasma temperatures. The transition in the scaling law for 'strong' double layers suggested by the analytical two-beam models of Carlqvist (1982) is confirmed, and causality problems of standard double-layer simulation techniques applied to relativistic plasma systems are discussed.

  17. Transport dissipative particle dynamics model for mesoscopic advection- diffusion-reaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Li; Yazdani, Alireza; Tartakovsky, Alexandre M.

    2015-07-07

    We present a transport dissipative particle dynamics (tDPD) model for simulating mesoscopic problems involving advection-diffusion-reaction (ADR) processes, along with a methodology for the implementation of correct Dirichlet and Neumann boundary conditions in tDPD simulations. tDPD is an extension of the classic DPD framework with extra variables describing the evolution of concentration fields. The transport of concentration is modeled by a Fickian flux and a random flux between particles, and an analytical formula is proposed to relate the mesoscopic concentration friction to the effective diffusion coefficient. To validate the present tDPD model and the boundary conditions, we perform three tDPD simulations of one-dimensional diffusion with different boundary conditions, and the results show excellent agreement with the theoretical solutions. We also perform two-dimensional simulations of ADR systems, and the tDPD results agree well with those obtained by the spectral element method. Finally, we present an application of the tDPD model to the dynamic process of blood coagulation involving 25 reacting species in order to demonstrate the potential of tDPD in simulating biological dynamics at the mesoscale. We find that the tDPD solution of this comprehensive 25-species coagulation model is only twice as computationally expensive as the DPD simulation of the hydrodynamics alone, which is a significant advantage over available continuum solvers.

  18. Quantum Metropolis sampling.

    PubMed

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.

  19. Explicit simulation of ice particle habits in a Numerical Weather Prediction Model

    NASA Astrophysics Data System (ADS)

    Hashino, Tempei

    2007-05-01

    This study developed a scheme for the explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses the characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequencies and growth rates, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in an NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysical processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.

  20. Simulation of erosion by a particulate airflow through a ventilator

    NASA Astrophysics Data System (ADS)

    Ghenaiet, A.

    2015-08-01

    Particulate flows are a serious problem in air ventilation systems, leading to erosion of rotor blades and aerodynamic performance degradation. This paper presents numerical results for sand particle trajectories and erosion patterns in an axial ventilator and the subsequent blade deterioration. The flow field was solved separately using the code CFX-TASCflow. The Lagrangian approach for solid particle tracking implemented in our in-house code considers particle-eddy interaction, the particle size distribution, particle rebounds and near-wall effects. The assessment of erosion wear is based on the impact frequency and local values of the erosion rate. Particle trajectories and erosion simulation revealed distinctive zones of impacts with high rates of erosion, mainly on the blade pressure side, whereas the suction side is eroded around the leading edge.

  1. A deterministic Lagrangian particle separation-based method for advective-diffusion problems

    NASA Astrophysics Data System (ADS)

    Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.

    2008-12-01

    A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited to ecological and water quality modelling, where the definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples, on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae, are also presented.

  2. Statistical benchmark for BosonSampling

    NASA Astrophysics Data System (ADS)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle-type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  3. End-to-end plasma bubble PIC simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava

    2017-10-01

    Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.

  4. Spatio-Temporal Process Simulation of Dam-Break Flood Based on SPH

    NASA Astrophysics Data System (ADS)

    Wang, H.; Ye, F.; Ouyang, S.; Li, Z.

    2018-04-01

    On the basis of introducing the SPH (Smoothed Particle Hydrodynamics) simulation method, this paper solves the key research problems of the spatial and temporal scales appropriate to GIS (Geographical Information System) applications, the boundary condition equations coupled with the underlying surface, and the kernel function and parameters applicable to dam-break flood simulation. On this basis, a calculation method for the spatio-temporal process emulation of dam-break floods with detailed particles is proposed, and the spatio-temporal process is dynamically simulated using GIS modelling and visualization. The results show that the method yields more information and more objective and realistic results.
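
    For reference, the sketch below evaluates the standard 2D cubic spline smoothing kernel widely used in SPH. The paper selects its own kernel and parameters for dam-break simulation, so this particular form is an assumption, shown only to make "kernel function and parameters" concrete.

    ```python
    import numpy as np

    def cubic_spline_2d(r: np.ndarray, h: float) -> np.ndarray:
        """W(r, h) with compact support 2h, normalized for 2D."""
        q = r / h
        sigma = 10.0 / (7.0 * np.pi * h**2)
        return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                       np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

    r = np.linspace(0.0, 2.5, 6)
    print(cubic_spline_2d(r, h=1.0))
    ```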

  5. Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2016-11-01

    The study of fluid flow coupled with solids has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve this coupled multiphysics problem. The fluid motion is solved by the Lattice Boltzmann method, while the solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so that the expensive remeshing step is eliminated. The code can model both rigid and deformable solids and shows very good scaling results. It was validated with classic problems such as the migration of rigid particles and an ellipsoidal particle's orbit in shear flow. Examples of applications to blood flow, drug delivery, platelet adhesion and rupture are also given in the paper. This work was supported by NIH.

  6. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message-passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--on other parallel machines once the machine-dependent parameters are known.

  7. Lineage mapper: A versatile cell and particle tracker

    NASA Astrophysics Data System (ADS)

    Chalfoun, Joe; Majurski, Michael; Dima, Alden; Halter, Michael; Bhadriraju, Kiran; Brady, Mary

    2016-11-01

    The ability to accurately track cells and particles from images is critical to many biomedical problems. To address this, we developed Lineage Mapper, an open-source tracker for time-lapse images of biological cells, colonies, and particles. Lineage Mapper tracks objects independently of the segmentation method, detects mitosis in confluence, separates cell clumps mistakenly segmented as a single cell, provides accuracy and scalability even on terabyte-sized datasets, and creates division and/or fusion lineages. Lineage Mapper has been tested and validated on multiple biological and simulated problems. The software is available in ImageJ and Matlab at isg.nist.gov.

  8. A simulator for discrete quantum walks on lattices

    NASA Astrophysics Data System (ADS)

    Rodrigues, J.; Paunković, N.; Mateus, P.

    In this paper, we present a simulator for two-particle quantum walks on the line and one-particle quantum walks on a two-dimensional square lattice. It can be used to investigate the equivalence between the two cases (one- and two-particle walks) for various boundary conditions (open, circular, reflecting, absorbing and their combinations). For the case of a single walker on a two-dimensional lattice, the simulator can also implement the Möbius strip. Other topologies for the walker, such as certain types of planar graphs with degree up to 4, are also simulated by the proposed tool by considering missing links over the lattice. The main purpose of the simulator is to study the genuinely quantum effects on the global properties of the two-particle joint probability distribution and their dependence on the entanglement between the walkers/axes. For that purpose, the simulator is designed to compute various quantities such as the entanglement and classical correlations, (classical and quantum) mutual information, the average distance between the two walkers, different hitting times and quantum discord. These quantities are of vital importance in designing possible algorithmic applications of quantum walks, namely in search, 3-SAT problems, etc. The simulator can also implement static partial measurements of particle positions and dynamic breaking of the links between certain nodes, both of which can be used to investigate the effects of decoherence on the walkers. Finally, the simulator can be used to investigate dynamic Anderson-like particle localization by varying the coin operators of certain nodes on the line/lattice. We also present some illustrative and relevant examples of one- and two-particle quantum walks in various scenarios. The tool was implemented in C and is available on-line at http://qwsim.weebly.com/.

  9. The problems of cosmic ray particle simulation for the near-Earth orbital and interplanetary flight conditions.

    PubMed

    Nymmik, R A

    1999-10-01

    A wide range of galactic cosmic ray and SEP event flux simulation problems for near-Earth satellite and manned spacecraft orbits and for interplanetary mission trajectories is discussed. The models of galactic cosmic ray and SEP events in the Earth's orbit beyond the Earth's magnetosphere are used as a basis. The particle fluxes in near-Earth orbits should be calculated using transmission functions. To calculate these functions, the dependences of the cutoff rigidities on the magnetic disturbance level and on magnetic local time have to be known. In the case of space flights towards the Sun and to the boundary of the solar system, particular attention is paid to the changes in SEP event occurrence frequency and size. Particle flux gradients are applied in this case to galactic cosmic ray fluxes.

  10. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  11. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  12. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  13. Particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Vranic, M.; Grismayer, T.; Martins, J. L.; Fonseca, R. A.; Silva, L. O.

    2015-06-01

    Particle-in-cell merging algorithms aim to dynamically resample the six-dimensional phase space occupied by particles without substantially distorting the physical description of the system. Whereas various approaches have been proposed in previous works, none of them seemed able to fully conserve charge, momentum and energy together with their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy is ensured by the resolution of a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing, in addition to the conservation of local quantities, good reproducibility of the particle distributions. In cases where the number of particles would otherwise increase exponentially in the simulation box, the dynamical merging permits a considerable speedup and significant memory savings that otherwise would make the simulations impossible to perform.
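
    A hedged, nonrelativistic sketch of the conservation algebra at the heart of such a merge: N macro-particles are replaced by two daughters placed symmetrically about the mean velocity, so that weight (charge), momentum and kinetic energy are conserved exactly. The paper works in relativistic phase space and also controls the resulting distributions, which this toy omits; the daughter direction e is a free choice.

    ```python
    import numpy as np

    def merge(weights: np.ndarray, velocities: np.ndarray):
        """weights: (N,), velocities: (N, 3) of same-species macro-particles."""
        W  = weights.sum()
        vc = weights @ velocities / W                    # momentum-conserving mean velocity
        ke = 0.5 * np.sum(weights * np.sum(velocities**2, axis=1))
        # daughters of weight W/2 at vc +/- d reproduce the momentum exactly;
        # the magnitude |d| is then fixed by the kinetic energy budget
        d2 = 2.0 * ke / W - vc @ vc                      # always >= 0
        e  = np.array([1.0, 0.0, 0.0])                   # free direction choice
        d  = np.sqrt(max(d2, 0.0)) * e
        return (W / 2.0, vc + d), (W / 2.0, vc - d)

    rng = np.random.default_rng(2)
    print(merge(rng.uniform(0.5, 1.5, 8), rng.normal(size=(8, 3))))
    ```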

  14. Animating Wall-Bounded Turbulent Smoke via Filament-Mesh Particle-Particle Method.

    PubMed

    Liao, Xiangyun; Si, Weixin; Yuan, Zhiyong; Sun, Hanqiu; Qin, Jing; Wang, Qiong; Heng, Pheng-Ann

    2018-03-01

    Turbulent vortices in smoke flows are crucial for a visually interesting appearance. Unfortunately, it is challenging to efficiently simulate these appealing effects in the framework of vortex filament methods. The vortex-filaments-in-grids scheme makes it possible to efficiently generate turbulent smoke with macroscopic vortical structures, but it suffers from projection-related dissipation, so the small-scale vortical structures below grid resolution are hard to capture. In addition, this scheme cannot be applied to wall-bounded turbulent smoke simulation, which requires efficiently handling smoke-obstacle interaction and creating vorticity at the obstacle boundary. To tackle these issues, we propose an effective filament-mesh particle-particle (FMPP) method for fast wall-bounded turbulent smoke simulation with ample details. The filament-mesh component approximates the smooth long-range interactions by splatting vortex filaments onto a grid, solving the Poisson problem with a fast solver, and then interpolating back to smoke particles. The particle-particle component introduces a smoothed particle hydrodynamics (SPH) turbulence model for particles in the same grid cell, where interactions between particles cannot be properly captured under grid resolution. We then sample the surface of obstacles with boundary particles, allowing the interaction between smoke and obstacles to be treated as pressure forces in SPH. In addition, a vortex formation region is defined at the back of obstacles, providing smoke particles flowing past the separation particles with a vorticity force to simulate the subsequent vortex shedding phenomenon. The proposed approach can synthesize the lost small-scale vortical structures and also achieve smoke-obstacle interaction with vortex shedding at obstacle boundaries in a lightweight manner. The experimental results demonstrate that our FMPP method can achieve more appealing visual effects than the vortex-filaments-in-grids scheme by efficiently simulating more vivid thin turbulent features.

  15. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

    In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure into two different models: one for free-surface flows, and one for the post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
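
    A minimal sketch of the square-pattern refinement described above: one SPH particle is replaced by four daughters centered on its position, each carrying a quarter of the mass and a reduced smoothing length, so mass is conserved exactly. The separation ratio eps and smoothing ratio alpha below are placeholders, not the optimal values derived in the paper.

    ```python
    import numpy as np

    def refine(x: np.ndarray, m: float, h: float, eps: float = 0.4, alpha: float = 0.9):
        """Split one particle at x into four daughters on a square pattern."""
        offsets = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
        daughters = x + eps * h * offsets   # square of half-width eps*h around x
        return daughters, m / 4.0, alpha * h

    xs, m_d, h_d = refine(np.array([0.0, 0.0]), m=1.0, h=0.1)
    print(xs, m_d, h_d)
    ```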

  16. Numerical simulation support to the ESA/THOR mission

    NASA Astrophysics Data System (ADS)

    Valentini, F.; Servidio, S.; Perri, S.; Perrone, D.; De Marco, R.; Marcucci, M. F.; Daniele, B.; Bruno, R.; Camporeale, E.

    2016-12-01

    THOR is a spacecraft concept currently undergoing study phase as a candidate for the next ESA medium size mission M4. THOR has been designed to solve the longstanding physical problems of particle heating and energization in turbulent plasmas. It will provide high resolution measurements of electromagnetic fields and particle distribution functions with unprecedented resolution, with the aim of exploring the so-called kinetic scales. We present the numerical simulation framework which is supporting the THOR mission during the study phase. The THOR team includes many scientists developing and running different simulation codes (Eulerian-Vlasov, Particle-In-Cell, Gyrokinetics, Two-fluid, MHD, etc.), addressing the physics of plasma turbulence, shocks, magnetic reconnection and so on. These numerical codes are being used during the study phase, mainly with the aim of addressing the following points: (i) to simulate the response of real particle instruments on board THOR, by employing an electrostatic analyser simulator which mimics the response of the CSW, IMS and TEA instruments to the particle velocity distributions of protons, alpha particles and electrons, as obtained from kinetic numerical simulations of plasma turbulence; (ii) to compare multi-spacecraft with single-spacecraft configurations in measuring current density, by making use of both numerical models of synthetic turbulence and real data from MMS spacecraft; (iii) to investigate the validity of the Taylor hypothesis in different configurations of plasma turbulence.

  17. An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snider, D.M.; O`Rourke, P.J.; Andrews, M.J.

    1997-06-01

    A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to a Eulerian grid and then mapping the computed stress tensors back to the particle positions. This approach utilizes the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of particle types, sizes, and densities, with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution, thus avoiding Courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.
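
    A 1D sketch of the particle-grid-particle mapping the abstract describes: particle volume is deposited onto a grid with linear weights, a stress is evaluated on the grid, and its gradient is interpolated back to the particle positions. The quadratic stress closure here is a toy stand-in, and the implicit multiphase solver is not shown.

    ```python
    import numpy as np

    dx, ng = 0.1, 32
    rng = np.random.default_rng(3)
    xp  = rng.uniform(0.0, ng * dx, 500)              # particle positions
    vol = np.full_like(xp, 2e-4)                      # particle volumes

    i = (xp / dx).astype(int)                         # left grid node
    f = xp / dx - i                                   # fractional offset
    theta = np.zeros(ng + 1)
    np.add.at(theta, i,     (1.0 - f) * vol / dx)     # deposit volume fraction
    np.add.at(theta, i + 1, f * vol / dx)

    tau  = 1e3 * theta**2                             # toy interparticle stress closure
    grad = np.gradient(tau, dx)
    force = (1.0 - f) * grad[i] + f * grad[i + 1]     # interpolate back to particles
    print(force[:5])
    ```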

  18. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
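
    The weight-window mechanics underlying these methods can be sketched in a few lines: particles above the window are split, particles below it play Russian roulette, and the expected weight is preserved either way, so the estimator stays unbiased. The window bounds below are arbitrary; the hybrid methods of the thesis set them from a deterministic adjoint solution.

    ```python
    import random

    def apply_weight_window(w: float, w_low: float, w_high: float, w_survive: float):
        """Return the list of outgoing particle weights (possibly empty)."""
        if w > w_high:                    # split into n equal-weight copies
            n = int(w / w_high) + 1
            return [w / n] * n
        if w < w_low:                     # Russian roulette, unbiased on average
            return [w_survive] if random.random() < w / w_survive else []
        return [w]

    random.seed(0)
    print(apply_weight_window(5.0, 0.5, 2.0, 1.0))   # splitting
    print(apply_weight_window(0.1, 0.5, 2.0, 1.0))   # roulette
    ```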

  19. Simulation of Powder Layer Deposition in Additive Manufacturing Processes Using the Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbold, E. B.; Walton, O.; Homel, M. A.

    2015-10-26

    This document serves as a final report on a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split among two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges, with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screen geometries.
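
    For flavor, the basic building block of such DEM simulations is the inter-particle contact force. Below is a minimal linear spring-dashpot normal-force sketch; the stiffness and damping constants are illustrative assumptions, and the report's actual contact law (a production code would also include tangential friction) is not reproduced here.

    ```python
    import numpy as np

    # Minimal linear spring-dashpot normal contact force between two spheres,
    # the simplest DEM contact model. Parameter values are illustrative only.

    def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1e4, c=5.0):
        """Repulsive normal force on particle 1 due to overlap with particle 2."""
        d = x1 - x2
        dist = np.linalg.norm(d)
        overlap = (r1 + r2) - dist
        if overlap <= 0.0 or dist == 0.0:
            return np.zeros(3)                   # not in contact
        n = d / dist                             # unit normal, points toward 1
        v_rel_n = np.dot(v1 - v2, n)             # normal relative velocity
        return (k * overlap - c * v_rel_n) * n   # spring + dashpot

    f = normal_contact_force(np.zeros(3), np.array([0.9e-3, 0.0, 0.0]),
                             np.zeros(3), np.zeros(3), r1=0.5e-3, r2=0.5e-3)
    print(f)   # particle 1 is pushed away from particle 2
    ```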

  20. Electromagnetic gyrokinetic simulation in GTS

    NASA Astrophysics Data System (ADS)

    Ma, Chenhao; Wang, Weixing; Startsev, Edward; Lee, W. W.; Ethier, Stephane

    2017-10-01

    We report recent developments in electromagnetic simulation for general toroidal geometry based on the particle-in-cell gyrokinetic code GTS. Because of the cancellation problem, EM gyrokinetic simulation has numerical difficulties in the MHD limit, where k⊥ρi → 0 and/or β > me/mi. Recently, several approaches have been developed to circumvent this problem: (1) a p∥ formulation with an analytical skin term iteratively approximated by simulation particles (Yang Chen); (2) a modified p∥ formulation with ∫ dt E∥ used in place of A∥ (Mishchenko); (3) a conservative scheme in which the electron density perturbation for the Poisson equation is calculated from an electron continuity equation (Bao); (4) a double-split-weight scheme with two weights, one for the Poisson equation and one for the time derivative of Ampere's law, each with different splits designed to remove large terms from the Vlasov equation (Startsev). These algorithms are being implemented into the GTS framework for general toroidal geometry. The performance of these different algorithms will be compared for various EM modes.

  1. Fate and Transport of Nanoparticles in Porous Media: A Numerical Study

    NASA Astrophysics Data System (ADS)

    Taghavy, Amir

    Understanding the transport characteristics of NPs in natural soil systems is essential to revealing their potential impact on the food chain and groundwater. In addition, many nanotechnology-based remedial measures require effective transport of NPs through soil, which necessitates accurate understanding of their transport and retention behavior. Based upon the conceptual knowledge of environmental behavior of NPs, mathematical models can be developed to represent the coupling of processes that govern the fate of NPs in subsurface, serving as effective tools for risk assessment and/or design of remedial strategies. This work presents an innovative hybrid Eulerian-Lagrangian modeling technique for simulating the simultaneous reactive transport of nanoparticles (NPs) and dissolved constituents in porous media. Governing mechanisms considered in the conceptual model include particle-soil grain, particle-particle, particle-dissolved constituents, and particle- oil/water interface interactions. The main advantage of this technique, compared to conventional Eulerian models, lies in its ability to address non-uniformity in physicochemical particle characteristics. The developed numerical simulator was applied to investigate the fate and transport of NPs in a number of practical problems relevant to the subsurface environment. These problems included: (1) reductive dechlorination of chlorinated solvents by zero-valent iron nanoparticles (nZVI) in dense non-aqueous phase liquid (DNAPL) source zones; (2) reactive transport of dissolving silver nanoparticles (nAg) and the dissolved silver ions; (3) particle-particle interactions and their effects on the particle-soil grain interactions; and (4) influence of particle-oil/water interface interactions on NP transport in porous media.

  2. Particle acceleration in solar active regions being in the state of self-organized criticality.

    NASA Astrophysics Data System (ADS)

    Vlahos, Loukas

    We review recent observational results on flare initiation and particle acceleration in solar active regions. Elaborating a statistical approach to describe the spatiotemporally intermittent electric field structures formed inside a flaring solar active region, we investigate the efficiency of such structures in accelerating charged particles (electrons and protons). The large-scale magnetic configuration in the solar atmosphere responds to the strong turbulent flows that convey perturbations across the active region by initiating avalanche-type processes. The resulting unstable structures correspond to small-scale dissipation regions hosting strong electric fields. Previous research on particle acceleration in strongly turbulent plasmas provides a general framework for addressing such a problem. This framework combines various electromagnetic field configurations obtained by magnetohydrodynamical (MHD) or cellular automata (CA) simulations, or by employing a statistical description of the field's strength and configuration, with test particle simulations. We work on data-driven 3D magnetic field extrapolations based on self-organized criticality (SOC) models. A relativistic test-particle simulation traces each particle's guiding center within these configurations. Using the simulated particle-energy distributions, we test our results against current observations in the framework of the collisional thick-target model (CTTM) of solar hard X-ray (HXR) emission.

  3. Optical depth in particle-laden turbulent flows

    NASA Astrophysics Data System (ADS)

    Frankel, A.; Iaccarino, G.; Mani, A.

    2017-11-01

    Turbulent clustering of particles causes an increase in the radiation transmission through gas-particle mixtures. Attempts to capture the ensemble-averaged transmission lead to a closure problem called the turbulence-radiation interaction. A simple closure model based on the particle radial distribution function is proposed to capture the effect of turbulent fluctuations in the concentration on radiation intensity. The model is validated against a set of particle-resolved ray tracing experiments through particle fields from direct numerical simulations of particle-laden turbulence. The form of the closure model is generalizable to arbitrary stochastic media with known two-point correlation functions.

  4. Metabolic flux estimation using particle swarm optimization with penalty function.

    PubMed

    Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun

    2009-01-01

    Metabolic flux estimation through 13C tracer experiments is crucial for quantifying intracellular metabolic fluxes. In fact, it corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with a penalty function to solve the 13C-based metabolic flux estimation problem. The constrained problem is transformed into an unconstrained one by penalizing the stoichiometric constraints and building a single objective function, which in turn is minimized using the PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. Simulation results show that the proposed algorithm has superior performance and fast convergence when compared to other existing algorithms.
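
    A minimal sketch of the approach follows, with a toy quadratic objective and a single linear "stoichiometric" constraint standing in for the 13C flux model; all parameter values are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    # Sketch of PSO with a quadratic penalty for an equality constraint:
    # the constraint violation is penalized and the resulting unconstrained
    # objective is minimized by a standard global-best PSO.

    rng = np.random.default_rng(0)

    def objective(v):                 # stands in for the measured-vs-simulated distance
        return np.sum((v - 2.0) ** 2)

    def constraint(v):                # stands in for a stoichiometric balance, sum(v) = 5
        return np.sum(v) - 5.0

    def penalized(v, mu=100.0):
        return objective(v) + mu * constraint(v) ** 2

    n_particles, dim, iters = 30, 3, 200
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and learning factors
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(penalized, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(penalized, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print(gbest, constraint(gbest))   # fluxes approach 5/3 each, violation -> 0
    ```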

  5. Numerical simulation of the hydrodynamic instabilities of Richtmyer-Meshkov and Rayleigh-Taylor

    NASA Astrophysics Data System (ADS)

    Fortova, S. V.; Shepelev, V. V.; Troshkin, O. V.; Kozlov, S. A.

    2017-09-01

    The paper presents the results of numerical simulation of the development of the Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities encountered in experiments [1-3]. For the numerical solution, the TPS (Turbulence Problem Solver) software package is used, which implements a generalized approach to constructing computer programs for a wide range of hydrodynamics problems described by systems of equations of hyperbolic type. The numerical methods used are the large-particle method and a second-order ENO scheme with a Roe solver for the approximate solution of the Riemann problem.

  6. Numerical simulation of the solitary wave interacting with an elastic structure using MPS-FEM coupled method

    NASA Astrophysics Data System (ADS)

    Rao, Chengping; Zhang, Youlin; Wan, Decheng

    2017-12-01

    Fluid-Structure Interaction (FSI) caused by fluid impacting onto a flexible structure commonly occurs in naval architecture and ocean engineering. Research on the problem of wave-structure interaction is important to ensure the safety of offshore structures. This paper presents the Moving Particle Semi-implicit and Finite Element coupled method (MPS-FEM) to simulate FSI problems. The Moving Particle Semi-implicit (MPS) method is used to calculate the fluid domain, while the Finite Element Method (FEM) is used for the structural domain. The scheme for the coupling of MPS and FEM is introduced first. Then, numerical validation and convergence studies are performed to verify the accuracy of the solver for solitary wave generation and FSI problems. Finally, the interaction between a solitary wave and an elastic structure is investigated using the MPS-FEM coupled method.

  7. A particle-particle hybrid method for kinetic and continuum equations

    NASA Astrophysics Data System (ADS)

    Tiwari, Sudarshan; Klar, Axel; Hardt, Steffen

    2009-10-01

    We present a coupling procedure for two different types of particle methods for the Boltzmann and the Navier-Stokes equations. A variant of the DSMC method is applied to simulate the Boltzmann equation, whereas a meshfree Lagrangian particle method, similar to the SPH method, is used for simulations of the Navier-Stokes equations. An automatic domain decomposition approach is used with the help of a continuum breakdown criterion. We apply adaptive spatial and time meshes. The classical Sod 1D shock tube problem is solved for a large range of Knudsen numbers. Results from the Boltzmann, Navier-Stokes and hybrid solvers are compared. The hybrid solver is 3-4 times faster, in CPU time, than the Boltzmann solver.
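
    The continuum breakdown criterion that drives such a domain decomposition can be sketched as follows. The gradient-based local Knudsen number and the 0.05 threshold are common choices in the literature, assumed here for illustration rather than taken from the paper.

    ```python
    import numpy as np

    # Sketch of an automatic domain decomposition based on a continuum
    # breakdown criterion: cells where a gradient-based local Knudsen number
    # exceeds a threshold are assigned to the kinetic (DSMC) solver, the rest
    # to the continuum (Navier-Stokes) particle solver.

    def split_domain(rho, dx, mean_free_path, kn_threshold=0.05):
        """Return a boolean mask: True where the DSMC solver should be used."""
        grad = np.abs(np.gradient(rho, dx))
        kn_local = mean_free_path * grad / rho    # lambda * |d rho / dx| / rho
        return kn_local > kn_threshold

    x = np.linspace(0.0, 1.0, 200)
    # Smoothed Sod-like density profile: a steep jump around x = 0.5
    rho = 0.5625 - 0.4375 * np.tanh((x - 0.5) / 0.02)
    mask = split_domain(rho, x[1] - x[0], mean_free_path=0.005)
    print(f"DSMC cells: {mask.sum()} of {mask.size}")  # kinetic region hugs the shock
    ```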

  8. Spatially Localized Particle Energization by Landau Damping in Current Sheets

    NASA Astrophysics Data System (ADS)

    Howes, G. G.; Klein, K. G.; McCubbin, A. J.

    2017-12-01

    Understanding the mechanisms of particle energization through the removal of energy from turbulent fluctuations in heliospheric plasmas is a grand challenge problem in heliophysics. Under the weakly collisional conditions typical of heliospheric plasma, kinetic mechanisms must be responsible for this energization, but the nature of those mechanisms remains elusive. In recent years, the spatial localization of plasma heating near current sheets, both in the solar wind and in numerical simulations, has gained much attention. Here we show, using the new field-particle correlation technique, that the spatially localized particle energization occurring in a nonlinear gyrokinetic simulation has the velocity-space signature of Landau damping, suggesting that this well-known collisionless damping mechanism indeed actively leads to spatially localized heating in the vicinity of current sheets.

  9. Dynamic load balance scheme for the DSMC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jin; Geng, Xiangren; Jiang, Dingwu

    The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While DSMC is well suited to parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the number of simulator particles will change dramatically. A load balance based upon the initial distribution of particles will break down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the number of simulator particles in each cell as weight information, a repartitioning based upon the principle that each processor handles approximately the same number of simulator particles has been achieved. The computation pauses several times to update the number of simulator particles in each processor and repartition the whole domain; thus the load balance across the processor array holds for the duration of the computation, and the parallel efficiency is improved effectively. The benchmark problem of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.
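
    The repartitioning principle, each processor receiving roughly the same number of simulator particles, can be sketched in one dimension; a simple prefix-sum split stands in here for the METIS graph partitioning actually used in the paper.

    ```python
    import numpy as np

    # Sketch of particle-count-weighted repartitioning: assign contiguous
    # blocks of cells to processors so each handles roughly the same number
    # of simulator particles.

    def repartition(particles_per_cell, n_procs):
        """Return the processor id owning each cell."""
        cum = np.cumsum(particles_per_cell)
        total = cum[-1]
        # cell i goes to the processor whose quota its prefix sum falls into
        owner = cum * n_procs // (total + 1)
        return np.minimum(owner, n_procs - 1).astype(int)

    rng = np.random.default_rng(2)
    counts = rng.integers(0, 100, size=40)        # evolving per-cell particle counts
    owner = repartition(counts, n_procs=4)
    loads = [counts[owner == p].sum() for p in range(4)]
    print(loads)                                  # near-equal particle totals
    ```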

  10. Full Eulerian simulations of biconcave neo-Hookean particles in a Poiseuille flow

    NASA Astrophysics Data System (ADS)

    Sugiyama, Kazuyasu; Ii, Satoshi; Takeuchi, Shintaro; Takagi, Shu; Matsumoto, Yoichiro

    2010-03-01

    For a given initial configuration of a multi-component geometry represented by voxel-based data on a fixed Cartesian mesh, a full Eulerian finite difference method facilitates the solution of dynamic interaction problems between a Newtonian fluid and a hyperelastic material. The solid volume fraction and the left Cauchy-Green deformation tensor are temporally updated on the Eulerian frame, respectively, to distinguish the fluid and solid phases and to describe the solid deformation. The simulation method is applied to two- and three-dimensional motions of two biconcave neo-Hookean particles in a Poiseuille flow. Similar to the numerical study of red blood cell motion in a circular pipe (Gong et al. in J Biomech Eng 131:074504, 2009), in which Skalak's constitutive laws for the membrane are considered, the deformation and the relative position and orientation of a pair of particles are strongly dependent upon the initial configuration. The increase in the apparent viscosity depends upon the developed arrangement of the particles. The present Eulerian approach is demonstrated to have the potential to be easily extended to larger systems involving a large number of particles of complicated geometries.

  11. An interdiffusional model for prediction of the interaction layer growth in the system uranium molybdenum/aluminum

    NASA Astrophysics Data System (ADS)

    Soba, A.; Denis, A.

    2007-03-01

    The codes PLACA and DPLACA, developed by this working group, simulate the behavior of a plate-type fuel containing in its core a foil of monolithic or dispersed fissile material, respectively, under normal operating conditions of a research reactor. Dispersion fuels usually consist of ceramic particles of a uranium compound in a high thermal conductivity matrix. The use of particles of a U-Mo alloy in an Al matrix requires specially devoted subroutines able to simulate the growth of the interaction layer that develops between the particles and the matrix. A model that accounts for these phenomena is presented in this work. It is based on the assumption that diffusion of U and Al through the layer is the rate-determining step. Two moving interfaces separate the growing reaction layer from the original phases; the kinetics of these boundaries are solved as Stefan problems. In order to test the model and the associated code, some simpler problems corresponding to similar systems, for which analytical solutions or experimental data are known, were simulated first. Experiments with planar U-Mo/Al diffusion couples, whose purpose is to obtain information on the system parameters, are reported in the literature; these experiments were simulated with PLACA. Results of experiments with U-Mo particles dispersed in Al, with and without irradiation, published in the open literature, were simulated with DPLACA. A satisfactory prediction of the whole reaction layer thickness and of the individual fractions corresponding to alloy and matrix consumption was obtained.

  12. Brownian motion of massive black hole binaries and the final parsec problem

    NASA Astrophysics Data System (ADS)

    Bortolas, E.; Gualandris, A.; Dotti, M.; Spera, M.; Mapelli, M.

    2016-09-01

    Massive black hole binaries (BHBs) are expected to be one of the most powerful sources of gravitational waves in the frequency range of the pulsar timing array and of forthcoming space-borne detectors. They are believed to form in the final stages of galaxy mergers, and then harden by slingshot ejections of passing stars. However, evolution via the slingshot mechanism may be ineffective if the reservoir of interacting stars is not readily replenished, and the binary shrinking may come to a halt at roughly a parsec separation. Recent simulations suggest that the departure from spherical symmetry, naturally produced in merger remnants, leads to efficient loss cone refilling, preventing the binary from stalling. However, current N-body simulations able to accurately follow the evolution of BHBs are limited to very modest particle numbers. Brownian motion may artificially enhance the loss cone refilling rate in low-N simulations, where the binary encounters a larger population of stars due to its random motion. Here we study the significance of Brownian motion of BHBs in merger remnants in the context of the final parsec problem. We simulate mergers with various particle numbers (from 8k to 1M) and with several density profiles. Moreover, we compare simulations where the BHB is fixed at the centre of the merger remnant with simulations where the BHB is free to random walk. We find that Brownian motion does not significantly affect the evolution of BHBs in simulations with particle numbers in excess of one million, and that the hardening measured in merger simulations is due to collisionless loss cone refilling.

  13. Multi-resolution MPS method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid

    2018-04-01

    In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions at different parts of the domain, utilising a particle splitting algorithm for the finer resolution and a particle merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries in the multi-resolution LSMPS method. A wall is represented by polygons for effective simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. By conducting simulations of channel flows and free-surface flows, the accuracy of the proposed method was verified.
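
    A mass- and momentum-conserving splitting step of the kind such multi-resolution particle methods rely on might look as follows. The daughter count, placement distance and inherited velocities below are illustrative choices, not the paper's algorithm.

    ```python
    import numpy as np

    # Sketch of conservative particle splitting for a multi-resolution
    # particle method: one coarse 2D particle becomes four daughters placed
    # symmetrically around the mother position, so total mass, linear
    # momentum and centre of mass are all preserved exactly.

    def split_particle(pos, vel, mass, spacing):
        """Split one 2D particle into 4 daughters conserving mass and momentum."""
        offsets = 0.25 * spacing * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
        d_pos = pos + offsets                # symmetric placement around mother
        d_vel = np.tile(vel, (4, 1))         # daughters inherit mother velocity
        d_mass = np.full(4, mass / 4.0)
        return d_pos, d_vel, d_mass

    pos, vel, mass = np.array([0.0, 0.0]), np.array([1.0, 0.5]), 4.0
    dp, dv, dm = split_particle(pos, vel, mass, spacing=0.1)
    print(dm.sum(), (dm[:, None] * dv).sum(axis=0))   # 4.0 and (4.0, 2.0) preserved
    ```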

  14. Coding considerations for standalone molecular dynamics simulations of atomistic structures

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-10-01

    The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab-initio programs for simulation on a standalone computer and illustrates the approach with C language codes in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine the evolution of particle parameters for up to several thousands of particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While both commercial and free packages are available, their heuristic nature prevents dissection. In addition, developing one's own codes has the obvious advantage of teaching techniques applicable to new problems.
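
    The velocity-time integration loop described here is, in essence, velocity Verlet. A compact sketch follows, in Python rather than C for brevity, with a Lennard-Jones pair potential standing in for the embedded-atom potential the paper discusses; all parameter values are illustrative.

    ```python
    import numpy as np

    # Velocity Verlet integration of a few particles under a Lennard-Jones
    # pair potential (a stand-in for the embedded-atom potential).

    def lj_forces(x, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces for a small set of particles."""
        n = len(x)
        f = np.zeros_like(x)
        for i in range(n):
            for j in range(i + 1, n):
                r = x[i] - x[j]
                r2 = np.dot(r, r)
                inv6 = (sigma * sigma / r2) ** 3
                fmag = 24 * eps * (2 * inv6 ** 2 - inv6) / r2
                f[i] += fmag * r       # Newton's third law
                f[j] -= fmag * r
        return f

    def velocity_verlet(x, v, m, dt, steps):
        f = lj_forces(x)
        for _ in range(steps):
            v += 0.5 * dt * f / m      # half kick
            x += dt * v                # drift
            f = lj_forces(x)           # new forces at updated positions
            v += 0.5 * dt * f / m      # half kick
        return x, v

    x = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
    v = np.zeros_like(x)
    x, v = velocity_verlet(x, v, m=1.0, dt=0.005, steps=1000)
    print(x)
    ```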

  15. Balancing Particle and Mesh Computation in a Particle-In-Cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert

    2016-01-01

    The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both of these are subject to load imbalances that can degrade performance and that evolve during a simulation. Each separately can be addressed adequately, but optimizing just for one can introduce significant load imbalances in the other, degrading overall performance. A technique has been developed based on Golden Section Search that minimizes wallclock time given prior information on wallclock time and on current particle distribution and mesh cost per cell, and also adapts to evolution in load imbalance in both particle and mesh work. In problems of interest this doubled the performance on full-system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.

  16. Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic.

    PubMed

    Francesco, Marco Di; Fagioli, Simone; Rosini, Massimiliano D

    2017-02-01

    We consider the follow-the-leader approximation of the Aw-Rascle-Zhang (ARZ) model for traffic flow in a multi-population formulation. We prove rigorous convergence to weak solutions of the ARZ system in the many-particle limit in the presence of vacuum. The result is based on uniform BV estimates on the discrete particle velocity. We complement our result with numerical simulations of the particle method compared with some exact solutions to the Riemann problem of the ARZ system.
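
    The follow-the-leader particle scheme can be sketched directly: each vehicle reads a discrete density from its headway and moves with the ARZ velocity w − p(ρ). The pressure law p(ρ) = ρ^γ and all parameter values below are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    # Sketch of the follow-the-leader (FtL) approximation of the ARZ model:
    # each follower's speed is its Lagrangian attribute w minus a pressure
    # evaluated at the discrete density rho = l / headway; the leader is free.

    def ftl_step(x, w, l, dt, gamma=2.0):
        """Advance sorted vehicle positions x (leader last) by one Euler step."""
        headway = np.diff(x)                      # gaps to the vehicle ahead
        rho = l / headway                         # discrete density per follower
        v = w[:-1] - rho ** gamma                 # ARZ velocity, followers only
        v = np.append(np.maximum(v, 0.0), w[-1])  # leader drives at speed w
        return x + dt * v

    x = np.linspace(0.0, 10.0, 21)                # 21 vehicles, uniform spacing
    w = np.full(21, 1.5)                          # Lagrangian attribute w = v + p
    for _ in range(200):
        x = ftl_step(x, w, l=0.25, dt=0.01)
    print(np.diff(x).min())                       # headways stay positive
    ```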

  17. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost, commercially available Graphics Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework conforms to the Compute Unified Device Architecture (CUDA), a standard for general-purpose graphics processing introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphics processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final implementation resulted in simulations that executed about 38 times faster than simulations run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generations of systems has also been investigated.
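
    The particle-sorting idea used in the grid interpolation phase can be sketched in a few lines; NumPy on the host stands in for the CUDA implementation here, but the point is the same: sorting particles by cell index makes per-cell work contiguous in memory.

    ```python
    import numpy as np

    # Sketch of sorting particles by cell index before grid interpolation:
    # after the sort, all particles in a cell are adjacent, so deposition and
    # gather touch one small region of memory (or shared memory tile) at a time.

    nx, n_p = 64, 10000
    rng = np.random.default_rng(3)
    x = rng.uniform(0.0, 1.0, n_p)                # particle positions
    cell = (x * nx).astype(int)                   # owning cell per particle

    order = np.argsort(cell, kind="stable")       # stands in for a GPU counting sort
    x_sorted, cell_sorted = x[order], cell[order]

    # Per-cell ranges in the sorted arrays, found by binary search:
    start = np.searchsorted(cell_sorted, np.arange(nx))
    end = np.searchsorted(cell_sorted, np.arange(nx) + 1)
    print(end - start)                            # particle count per cell
    ```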

  18. Simulating propagation of coherent light in random media using the Fredholm type integral equation

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2017-06-01

    Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g. Radiative Transfer Theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow prediction of the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application to studying physical phenomena connected to a specific distribution of scattering particles (e.g. laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving the Fredholm-type integral equation which describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g. by directly solving the corresponding linear equation system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and with finite-difference type simulations. We also present an extension of the method to problems of multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.

  19. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Furthermore, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multi-resolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000, and the obtained results are compared to results from the literature.

  20. Investigation of unsteadiness in Shock-particle cloud interaction: Fully resolved two-dimensional simulation and one-dimensional modeling

    NASA Astrophysics Data System (ADS)

    Hosseinzadeh-Nik, Zahra; Regele, Jonathan D.

    2015-11-01

    Dense compressible particle-laden flow, which has a complex nature, exists in various engineering applications. A shock wave impacting a particle cloud is a canonical problem for investigating this type of flow. It has been demonstrated that large flow unsteadiness is generated inside the particle cloud by the flow induced by the shock passage. It is desirable to develop models for the Reynolds stress that capture the energy contained in vortical structures, so that volume-averaged models with point particles can be simulated accurately. However, previous work used the Euler equations, which makes the prediction of vorticity generation and propagation inaccurate. In this work, a fully resolved two-dimensional (2D) simulation using the compressible Navier-Stokes equations, with a volume penalization method to model the particles, has been performed with the parallel adaptive wavelet-collocation method. The results still show large unsteadiness inside and downstream of the particle cloud. A 1D model is created for the unclosed terms based upon these 2D results. The 1D model uses a two-phase simple low-dissipation AUSM scheme (TSLAU) coupled with the compressible two-phase kinetic energy equation.

  1. On improving the algorithm efficiency in the particle-particle force calculations

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2016-09-01

    The problem of calculating inter-particle forces in particle-particle (PP) simulation models takes an important place in scientific computing. Such simulation models are used in diverse scientific applications arising in astrophysics, plasma physics, particle accelerators, etc., where long-range forces are considered. The inverse-square laws, such as Coulomb's law of electrostatic force and Newton's law of universal gravitation, are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a double nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(Np²), where Np is the total number of particles involved. The low efficiency of the PP algorithm becomes a challenging issue in cases where high accuracy is required. An example can be taken from charged particle beam dynamics where, in computing the beam's own space charge, so-called macro-particles are used (see e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
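
    For reference, the double nested loop of the naïve PP method looks as follows; this is a generic inverse-square (Coulomb/gravity-type) force sum with an assumed softening parameter, not a specific code from the paper.

    ```python
    import numpy as np

    # The naive O(Np^2) particle-particle force calculation. A small softening
    # eps avoids the singularity at zero separation (illustrative value).

    def pp_forces(pos, q, k=1.0, eps=1e-3):
        """Direct-sum pairwise inverse-square forces on all particles."""
        n = len(pos)
        f = np.zeros_like(pos)
        for i in range(n):             # the double nested loop of the PP method
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                r2 = np.dot(d, d) + eps ** 2
                fij = k * q[i] * q[j] * d / r2 ** 1.5
                f[i] += fij            # Newton's third law halves the work
                f[j] -= fij
        return f

    rng = np.random.default_rng(4)
    pos = rng.uniform(-1, 1, (100, 3))
    q = np.ones(100)                   # like charges: mutual repulsion
    print(pp_forces(pos, q)[0])
    ```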

  2. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) to develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle; 2) to develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions; 3) to develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes have had a transforming effect on space physics and astrophysics. We expect that our new-generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  3. Modeling of magnetic hystereses in soft MREs filled with NdFeB particles

    NASA Astrophysics Data System (ADS)

    Kalina, K. A.; Brummund, J.; Metsch, P.; Kästner, M.; Borin, D. Yu; Linke, J. M.; Odenbach, S.

    2017-10-01

    Herein, we investigate the structure-property relationships of soft magnetorheological elastomers (MREs) filled with remanently magnetizable particles. The study is motivated by experimental results which indicate a large difference between the magnetization loops of soft MREs filled with NdFeB particles and the loops of such particles embedded in a comparatively stiff matrix, e.g. an epoxy resin. We present a microscale model for MREs based on a general continuum formulation of the magnetomechanical boundary value problem which is valid for finite strains. In particular, we develop an energetically consistent constitutive model for the hysteretic magnetization behavior of the magnetically hard particles. The microstructure is discretized and the problem is solved numerically in terms of a coupled nonlinear finite element approach. Since the local magnetic and mechanical fields are resolved explicitly inside the heterogeneous microstructure of the MRE, our model also accounts for interactions of particles close to each other. In order to connect the microscopic fields to effective macroscopic quantities of the MRE, a suitable computational homogenization scheme is used. Based on this modeling approach, it is demonstrated that the observable macroscopic behavior of the considered MREs results from the rotation of the embedded particles. Furthermore, the performed numerical simulations indicate that the reversal of the sample's magnetization occurs due to a combination of particle rotations and internal domain conversion processes. All of our simulation results for such materials are in good qualitative agreement with the experiments.

  4. Stochastic simulation of soil particle-size curves in heterogeneous aquifer systems through a Bayes space approach

    NASA Astrophysics Data System (ADS)

    Menafoglio, A.; Guadagnini, A.; Secchi, P.

    2016-08-01

    We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curve and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) effectively deal with the data dimensionality and constraints and (b) develop a simulation method for PSCs based upon a suitable and well-defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.

  5. Speed-limited particle-in-cell (SLPIC) simulation

    NASA Astrophysics Data System (ADS)

    Werner, Gregory; Cary, John; Jenkins, Thomas

    2016-10-01

    Speed-limited particle-in-cell (SLPIC) simulation is a new method for particle-based plasma simulation that allows increased timesteps in cases where the timestep is determined (e.g., in standard PIC) not by the smallest timescale of interest, but rather by an even smaller physical timescale that affects numerical stability. For example, SLPIC need not resolve the plasma frequency if plasma oscillations do not play a significant role in the simulation; in contrast, standard PIC must usually resolve the plasma frequency to avoid instability. Unlike fluid approaches, SLPIC retains a fully-kinetic description of plasma particles and includes all the same physical phenomena as PIC; in fact, if SLPIC is run with a PIC-compatible timestep, it is identical to PIC. However, unlike PIC, SLPIC can run stably with larger timesteps. SLPIC has been shown to be effective for finding steady-state solutions for 1D collisionless sheath problems, greatly speeding up computation despite a large ion/electron mass ratio. SLPIC is a relatively small modification of standard PIC, with no complexities that might degrade parallel efficiency (compared to PIC), and is similarly compatible with PIC field solvers and boundary conditions.

  6. Pentium Pro inside: I. A treecode at 430 Gigaflops on ASCI Red

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Becker, D. J.; Sterling, T.; Salmon, J. K.; Goda, M. P.

    1997-01-01

    As an entry for the 1997 Gordon Bell performance prize, we present results from two methods of solving the gravitational N-body problem on the Intel Teraflops system at Sandia National Laboratory (ASCI Red). The first method, an O(N²) algorithm, obtained 635 Gigaflops for a 1 million particle problem on 6800 Pentium Pro processors. The second solution method, a tree-code which scales as O(N log N), sustained 170 Gigaflops over a continuous 9.4 hour period on 4096 processors, integrating the motion of 322 million mutually interacting particles in a cosmology simulation, while saving over 100 Gigabytes of raw data. Additionally, the tree-code sustained 430 Gigaflops on 6800 processors for the first 5 time-steps of that simulation. This tree-code solution is approximately 10⁵ times more efficient than the O(N²) algorithm for this problem. As an entry for the 1997 Gordon Bell price/performance prize, we present two calculations from the disciplines of astrophysics and fluid dynamics. The simulations were performed on two 16-processor Pentium Pro Beowulf-class computers (Loki and Hyglac), constructed entirely from commodity personal computer technology at a cost of roughly $50k each in September 1996. The price of an equivalent system in August 1997 is less than $30k. At Los Alamos, Loki performed a gravitational tree-code N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 879 Mflops over a ten day period, and produced roughly 10 Gbytes of raw data.

  7. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data.

    PubMed

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2018-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  8. Numerical simulations of flying and swimming of biological systems with the viscous vortex particle method

    NASA Astrophysics Data System (ADS)

    Eldredge, Jeff

    2005-11-01

    Many biological mechanisms of locomotion involve the interaction of a fluid with a deformable surface undergoing large unsteady motion. Analysis of such problems poses a significant challenge to conventional grid-based computational approaches. Particularly in the moderate Reynolds number regime where many insects and fish function, viscous and inertial processes are both important, and vorticity serves a crucial role. In this work, the viscous vortex particle method is shown to provide an efficient, intuitive simulation approach for investigation of these biological systems. In contrast with a grid-based approach, the method solves the Navier--Stokes equations by tracking computational particles that carry smooth blobs of vorticity and exchange strength with one another to account for viscous diffusion. Thus, computational resources are focused on the physically relevant features of the flow, and there is no need for artificial boundary conditions. Building from previously-developed techniques for the creation of vorticity to enforce no-throughflow and no-slip conditions, the present method is extended to problems of coupled fluid--body dynamics by enforcement of global conservation of momenta. The application to several two-dimensional model problems is demonstrated, including single and multiple flapping wings and free swimming of a three-linkage fish.

  9. Vectorization of a particle simulation method for hypersonic rarefied flow

    NASA Technical Reports Server (NTRS)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine-grained parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit, where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10⁷ particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.

  10. A particle swarm optimization variant with an inner variable learning strategy.

    PubMed

    Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin

    2014-01-01

    Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping out strategy is adaptive in nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL stresses the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.

  11. A High Performance Computing Approach to the Simulation of Fluid Solid Interaction Problems with Rigid and Flexible Components (Open Access Publisher’s Version)

    DTIC Science & Technology

    2014-08-01

    Keywords: high performance computing, smoothed particle hydrodynamics, rigid body dynamics, flexible body dynamics. Authors: Arman Pazouki, Radu Serban, Dan Negrut. From the abstract: "This work outlines a unified ... are implemented to model rigid and flexible multibody dynamics. The two-way coupling of the fluid and solid phases is supported through use of ..."

  12. 3D Discrete element approach to the problem on abutment pressure in a gently dipping coal seam

    NASA Astrophysics Data System (ADS)

    Klishin, S. V.; Revuzhenko, A. F.

    2017-09-01

    Using the discrete element method, the authors have carried out a 3D implementation of the problem of strength loss in the rock mass surrounding a production heading and of abutment pressure in a gently dipping coal seam. The calculation of forces at the contacts between particles accounts for friction, rolling resistance and viscosity. Between the discrete particles modeling the coal seam, surrounding rock mass and broken rocks, an elastic connecting element is introduced to allow simulation of coherent materials. The paper presents the kinematic patterns of rock mass deformation, stresses in particles and the graph of abutment pressure behavior in the coal seam.

  13. Use of ³He⁺⁺ ICRF minority heating to simulate alpha particle heating

    DOEpatents

    Post, Jr., Douglass E.; Hwang, David Q.; Hovey, Jane

    1986-04-22

    Neutron activation due to high levels of neutron production in a first heated deuterium-tritium plasma is substantially reduced by using Ion Cyclotron Resonance Frequency (ICRF) heating of energetic ³He⁺⁺ ions in a second deuterium-³He⁺⁺ plasma, which exhibit an energy distribution and density similar to that of alpha particles in fusion reactor experiments, to simulate fusion alpha particle heating in the first plasma. The majority of the fast ³He⁺⁺ ions and their slowing-down spectrum can be studied using either a modulated hydrogen beam source for producing excited states of He⁺ in combination with spectrometers, or double charge exchange with a high energy neutral lithium beam and charged particle detectors at the plasma edge. The maintenance problems associated with neutron activation are thus substantially reduced, permitting energetic alpha particle behavior to be studied in near-term large fusion experiments.

  14. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

    The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated to the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.

  15. Electron heating in quasi-perpendicular shocks - A Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Veltri, Pierluigi; Mangeney, Andre; Scudder, Jack D.

    1990-01-01

    To study the problem of electron heating in quasi-perpendicular shocks under the combined effects of 'reversible' motion in the shock electric potential and magnetic field and of wave-particle interactions, a diffusion equation was derived in the drift (adiabatic) approximation and solved using a Monte Carlo method. The results show that most of the observations can be explained within this framework. The simulation has also definitively shown that the electron parallel temperature is determined by the dc electromagnetic field and not by any wave-particle induced heating. Wave-particle interactions are effective in smoothing out the large gradients in phase space produced by the 'reversible' motion of the electrons, thus producing a 'cooling' of the electrons. Some constraints on the wave-particle interaction process may be obtained from a detailed comparison between the simulation and observations. In particular, it appears that the adiabatic approximation must be violated in order to explain the observed evolution of the perpendicular temperature.

  16. Kassiopeia: a modern, extensible C++ particle tracking package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furse, Daniel; Groh, Stefan; Trost, Nikolaus

    The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease-of-use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle's state evolves. Thanks to the very general approach taken by Kassiopeia it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.

  17. Steady-state shear flows via nonequilibrium molecular dynamics and smooth-particle applied mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Posch, H.A.; Hoover, W.G.; Kum, O.

    1995-08-01

    We simulate both microscopic and macroscopic shear flows in two space dimensions using nonequilibrium molecular dynamics and smooth-particle applied mechanics. The time-reversible microscopic equations of motion are isomorphic to the smooth-particle description of inviscid macroscopic continuum mechanics. The corresponding microscopic particle interactions are relatively weak and long-ranged. Though conventional Green-Kubo theory suggests instability or divergence in two-dimensional flows, we successfully define and measure a finite shear viscosity coefficient by simulating stationary plane Couette flow. The special nature of the weak long-ranged smooth-particle functions corresponds to an unusual kind of microscopic transport. This microscopic analog is mainly kinetic, even at high density. For the soft Lucy potential which we use in the present work, nearly all the system energy is potential, but the resulting shear viscosity is nearly all kinetic. We show that the measured shear viscosities can be understood in terms of a simple weak-scattering model, and that this understanding is useful in assessing the usefulness of continuum simulations using the smooth-particle method. We apply that method to the Rayleigh-Bénard problem of thermally driven convection in a gravitational field.

  18. Kassiopeia: a modern, extensible C++ particle tracking package

    DOE PAGES

    Furse, Daniel; Groh, Stefan; Trost, Nikolaus; ...

    2017-05-16

    The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease-of-use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle's state evolves. Thanks to the very general approach taken by Kassiopeia it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.

  19. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede, and CS-Storm, with multiscale fluid-platelet simulations, in which a highly resolved and efficient numerical model for the nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S, a 680,718-particle single-platelet case; Exp-M, a 2,722,872-particle 4-platelet case; and Exp-L, a 10,891,488-particle 16-platelet case. Our implementation of the multiple time-stepping (MTS) algorithm outperformed single time-stepping (STS) in all experiments. Using MTS, our model achieved simulation rates of 12.5, 25.0, and 35.5 μs/day for Exp-S and 9.09, 6.25, and 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80, and Stampede K20, respectively. The best rate for Exp-L was 6.25 μs/day on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of the micro-to-nanoscale cellular components of platelets. By testing the performance characteristics of supercomputers with advanced computational algorithms that offer an optimal trade-off between cost and performance, this study demonstrates that such simulations are feasible with currently available HPC resources. PMID:27570250
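
    To make the multiple time-stepping idea above concrete, here is a minimal sketch in Python of a RESPA-style splitting, in which an expensive, slowly varying force is evaluated once per outer step while a cheap, fast force is sub-cycled. The force models, step sizes, and sub-cycle count are illustrative assumptions, not the authors' platelet code.

    import numpy as np

    def fast_force(x):
        # cheap, rapidly varying force (e.g. stiff short-range bonds) -- illustrative only
        return -100.0 * x

    def slow_force(x):
        # expensive, slowly varying force (e.g. long-range hydrodynamics) -- illustrative only
        return -0.1 * x**3

    def mts_step(x, v, dt, k=5):
        """One outer step of a RESPA-like multiple time-stepping scheme:
        the slow force is evaluated once per outer step, the fast force
        k times with the small inner step dt/k."""
        v = v + 0.5 * dt * slow_force(x)       # half kick with the slow force
        h = dt / k
        for _ in range(k):                     # inner loop: fast force only
            v = v + 0.5 * h * fast_force(x)
            x = x + h * v
            v = v + 0.5 * h * fast_force(x)
        v = v + 0.5 * dt * slow_force(x)       # closing half kick
        return x, v

    x, v = np.array([1.0]), np.array([0.0])
    for _ in range(1000):
        x, v = mts_step(x, v, dt=0.01)
    print(x, v)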

  20. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation using computers and to the difficulties encountered on currently available computers. Through the use of an analysis and measurement methodology, SARA, the control flow and data flow of the particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  1. Pairwise Force SPH Model for Real-Time Multi-Interaction Applications.

    PubMed

    Yang, Tao; Martin, Ralph R; Lin, Ming C; Chang, Jian; Hu, Shi-Min

    2017-10-01

    In this paper, we present a novel pairwise-force smoothed particle hydrodynamics (PF-SPH) model to enable simulation of various interactions at interfaces in real time. Realistic capture of interactions at interfaces is a challenging problem for SPH-based simulations, especially for scenarios involving multiple interactions at different interfaces. Our PF-SPH model can readily handle multiple types of interactions simultaneously in a single simulation; its basis is to use a larger support radius than that used in standard SPH. We adopt a novel anisotropic filtering term to further improve the performance of interaction forces. The proposed model is stable; furthermore, it avoids the particle clustering problem which commonly occurs at the free surface. We show how our model can be used to capture various interactions. We also consider the close connection between droplets and bubbles, and show how to animate bubbles rising in liquid as well as bubbles in air. Our method is versatile, physically plausible and easy to implement. Examples are provided to demonstrate the capabilities and effectiveness of our approach.
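
    The core PF-SPH ingredient described above, a pairwise force acting over a support radius larger than the standard SPH kernel radius, can be sketched as follows. The cosine profile, coefficients, and cutoffs are invented for illustration and are not the paper's calibrated forces.

    import numpy as np

    def pf_force(rij, h):
        """Illustrative pairwise interface force: repulsive below r = h,
        weakly attractive out to the enlarged support radius 2h, zero beyond.
        The functional form is an assumption, not the paper's model."""
        r = np.linalg.norm(rij)
        if r >= 2.0 * h or r == 0.0:
            return np.zeros_like(rij)
        # cos(pi*r/2h) is positive (repulsion) for r < h and
        # negative (attraction) for h < r < 2h
        mag = np.cos(0.5 * np.pi * r / h)
        return mag * rij / r

    def interface_forces(pos, h):
        """Accumulate pairwise interface forces over all pairs within 2h
        (naive O(N^2) loop; a neighbour list would be used in practice)."""
        n = len(pos)
        F = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):
                f = pf_force(pos[i] - pos[j], h)
                F[i] += f
                F[j] -= f
        return F

    pos = np.random.rand(50, 2)
    print(interface_forces(pos, h=0.1)[:3])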

  2. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction within a finite volume (the mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts the mean effects of molecular diffusion well under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to correlate positively with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
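
    The essence of a mixing volume model, several particles inside a finite volume relaxing their scalars toward the in-volume mean, which leaves the mean unchanged while destroying variance, can be sketched as below. The relaxation rate, volume geometry, and parameter values are illustrative assumptions, not the paper's formulation.

    import numpy as np

    def mvm_update(phi, positions, center, radius, dt, tau):
        """Relax the scalar phi of all particles inside a spherical mixing
        volume toward their ensemble mean (rate 1/tau is an assumption).
        The update is conservative: the in-volume mean is unchanged."""
        inside = np.linalg.norm(positions - center, axis=1) < radius
        if inside.sum() < 2:
            return phi
        mean = phi[inside].mean()
        phi = phi.copy()
        phi[inside] += (dt / tau) * (mean - phi[inside])
        return phi

    rng = np.random.default_rng(0)
    pos = rng.random((1000, 3))
    phi = rng.normal(size=1000)
    for _ in range(100):
        phi = mvm_update(phi, pos, center=np.array([0.5, 0.5, 0.5]),
                         radius=0.2, dt=0.01, tau=0.1)
    print(phi.var())   # scalar variance decays inside the mixing volume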

  3. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction within a finite volume (the mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts the mean effects of molecular diffusion well under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to correlate positively with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES–LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.

  4. CAD tools for detector design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Womersley, J.; DiGiacomo, N.; Killian, K.

    1990-04-01

    Detailed detector design has traditionally been divided between engineering optimization for structural integrity and subsequent physicist evaluation. The availability of CAD systems for engineering design enables the tasks to be integrated by providing tools for particle simulation within the CAD system. We believe this will speed up detector design and avoid problems due to the late discovery of shortcomings in the detector. This could occur because of the slowness of traditional verification techniques (such as detailed simulation with GEANT). One such new particle simulation tool is described. It is being used with the I-DEAS CAD package for SSC detector design at Martin-Marietta Astronautics and is to be released through the SSC Laboratory.

  5. Unpredictable convection in a small box: Molecular-dynamics experiments

    NASA Astrophysics Data System (ADS)

    Rapaport, D. C.

    1992-08-01

    The Rayleigh-Bénard problem has been studied using discrete-particle simulation of a two-dimensional fluid in a square box. The presence of temporal periodicity in the convective roll structure was observed, but, more significantly, different simulation runs under identical conditions but with initial states that differed in ways that are seemingly irrelevant at the macroscopic level exhibited very different forms of pattern evolution. The final state always consisted of a horizontally adjacent pair of rolls, but not all initial states evolved to produce well-established periodic behavior, despite the fact that very long runs were undertaken. Results for both hard- and soft-disk fluids are described; the simulations included systems with over 10⁵ particles.

  6. Non-Newtonian particulate flow simulation: A direct-forcing immersed boundary-lattice Boltzmann approach

    NASA Astrophysics Data System (ADS)

    Amiri Delouei, A.; Nazari, M.; Kayhani, M. H.; Kang, S. K.; Succi, S.

    2016-04-01

    In the current study, a direct-forcing immersed boundary-non-Newtonian lattice Boltzmann method (IB-NLBM) is developed to investigate the sedimentation and interaction of particles in shear-thinning and shear-thickening fluids. In the proposed IB-NLBM, the non-linear mechanics of non-Newtonian particulate flows is captured by combining the most desirable features of the immersed boundary and lattice Boltzmann methods. The notable effects of non-Newtonian behavior on particle motion, settling velocity, and generalized Reynolds number are investigated by simulating the benchmark problem of single-particle sedimentation at the same generalized Archimedes number. The effect on particle motion of the extra force due to added mass is analyzed, and is found to have a significant impact in shear-thinning fluids. For the first time, interaction phenomena among particles, such as drafting, kissing, and tumbling in non-Newtonian fluids, are investigated through simulations of two-particle and twelve-particle sedimentation. The results show that increasing the shear-thickening behavior of the fluid leads to a significant increase in the kissing time. Moreover, the transverse position of particles in shear-thinning fluids during the tumbling interval differs from that in Newtonian and shear-thickening fluids. The present non-Newtonian particulate study is applicable to several industrial and scientific problems, such as the sedimentation behavior of particles in non-Newtonian food-industry and biological fluids.

  7. Shock interaction with deformable particles using a constrained interface reinitialization scheme

    NASA Astrophysics Data System (ADS)

    Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.

    2016-02-01

    In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Gruneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume based solver on a Cartesian grid, that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle are on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption concerning the point-force models to non-ideal equations of state, thus justifying their use for the current problem.

  8. A Comparison of Filter-based Approaches for Model-based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Saha, Bhaskar; Goebel, Kai

    2012-01-01

    Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system, and on the desired algorithm performance. In this paper, we review three separate filters for the solution to the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
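
    For readers unfamiliar with the last of these filters, here is a generic bootstrap particle filter step of the kind reviewed above: propagate the particles through the dynamics, reweight by the observation likelihood, and resample when the effective sample size collapses. The linear dynamics, noise levels, and thresholds are stand-ins, not the paper's centrifugal pump model.

    import numpy as np

    def particle_filter_step(particles, weights, y_obs, f, h, q_std, r_std, rng):
        """One bootstrap particle-filter step for x' = f(x) + q, y = h(x) + r."""
        particles = f(particles) + rng.normal(0.0, q_std, particles.shape)
        lik = np.exp(-0.5 * ((y_obs - h(particles)) / r_std) ** 2)
        weights = weights * lik
        weights /= weights.sum()
        # systematic-style resampling when the effective sample size collapses
        if 1.0 / np.sum(weights**2) < 0.5 * len(particles):
            idx = rng.choice(len(particles), len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    rng = np.random.default_rng(1)
    N = 500
    x, parts, w = 0.0, rng.normal(0, 1, N), np.full(N, 1.0 / N)
    f = lambda x: 0.95 * x      # illustrative linear dynamics
    h = lambda x: x             # direct (noisy) observation
    for _ in range(50):
        x = f(x) + rng.normal(0, 0.1)
        y = h(x) + rng.normal(0, 0.2)
        parts, w = particle_filter_step(parts, w, y, f, h, 0.1, 0.2, rng)
    print("truth:", x, " estimate:", np.sum(w * parts))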

  9. Transient Simulation of Accumulating Particle Deposition in Pipe Flow

    NASA Astrophysics Data System (ADS)

    Hewett, James; Sellier, Mathieu

    2015-11-01

    Colloidal particles that deposit in pipe systems can lead to fouling, which is an expensive problem in both the geothermal and oil & gas industries. We investigate the gradual accumulation of deposited colloids in pipe flow using numerical simulations. An Euler-Lagrangian approach is employed for modelling the fluid and particle phases. Particle transport to the pipe wall is modelled with Brownian motion and turbulent diffusion. A two-way coupling exists between the fouled material and the pipe flow; the local mass flux of depositing particles is affected by the surrounding fluid in the near-wall region. This coupling is modelled by changing the cells from fluid to solid as the deposited particles exceed each local cell volume. A similar method has been used to model fouling in engine exhaust systems (Paz et al., Heat Transfer Eng., 34(8-9):674-682, 2013). We compare our deposition velocities and deposition profiles with an experiment on silica scaling in turbulent pipe flow (Kokhanenko et al., 19th AFMC, 2014).
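
    A minimal caricature of the fluid-to-solid cell switching described above: each near-wall cell accumulates deposited volume and is flagged solid once full, after which it collects no more. The cell count, volumes, and fluxes are invented for illustration, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    n_cells, cell_volume = 20, 1.0e-9        # wall cells, cell volume [m^3] (illustrative)
    deposit = np.zeros(n_cells)              # accumulated deposit volume per cell
    solid = np.zeros(n_cells, dtype=bool)

    for step in range(10000):
        flux = rng.random(n_cells) * 4e-13   # depositing volume per cell per step (invented)
        deposit += np.where(solid, 0.0, flux)          # solid cells collect no more
        newly_solid = (~solid) & (deposit >= cell_volume)
        solid |= newly_solid
        for c in np.flatnonzero(newly_solid):
            print(f"step {step}: cell {c} switched from fluid to solid")
    print("solid fraction:", solid.mean())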

  10. A smoothed particle hydrodynamics framework for modelling multiphase interactions at meso-scale

    NASA Astrophysics Data System (ADS)

    Li, Ling; Shen, Luming; Nguyen, Giang D.; El-Zein, Abbas; Maggi, Federico

    2018-01-01

    A smoothed particle hydrodynamics (SPH) framework is developed for modelling multiphase interactions at meso-scale, including the liquid-solid interaction induced deformation of the solid phase. With an inter-particle force formulation that mimics the inter-atomic force in molecular dynamics, the proposed framework includes the long-range attractions between particles, and more importantly, the short-range repulsive forces to avoid particle clustering and instability problems. Three-dimensional numerical studies have been conducted to demonstrate the capabilities of the proposed framework to quantitatively replicate the surface tension of water, to model the interactions between immiscible liquids and solid, and more importantly, to simultaneously model the deformation of solid and liquid induced by the multiphase interaction. By varying inter-particle potential magnitude, the proposed SPH framework has successfully simulated various wetting properties ranging from hydrophobic to hydrophilic surfaces. The simulation results demonstrate the potential of the proposed framework to genuinely study complex multiphase interactions in wet granular media.

  11. Simulation of Ejecta Production and Mixing Process of Sn Sample under shock loading

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Chen, Dawei; Sun, Haiquan; Ma, Dongjun

    2017-06-01

    Ejection may occur when a strong shock wave releases at the free surface of a metal, forming ejecta of high-speed particulate matter that subsequently mix with the surrounding gas. Ejecta production and mixing remain among the most difficult unresolved problems in shock physics and have many important engineering applications in implosion compression science. This paper introduces a methodology for the theoretical modeling and numerical simulation of the complex ejection and mixing process. Ejecta production is decoupled from the particle mixing process, and the ejecta state is obtained by direct numerical simulation of the evolution of initial defects on the metal surface. The particle mixing process is then simulated and resolved by a two-phase gas-particle model that uses the aforementioned ejecta state as its initial condition. A preliminary ejecta experiment on a planar Sn sample has validated the feasibility of the proposed methodology.

  12. Discrete Particle Swarm Optimization Routing Protocol for Wireless Sensor Networks with Multiple Mobile Sinks.

    PubMed

    Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming

    2016-07-14

    Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent changes of the paths between source nodes and the sinks caused by sink mobility introduce significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization (PSO) algorithm to build the optimal routing paths. However, conventional PSO is insufficient for solving discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario. The particle updating rule is also reconsidered based on the subnetwork topology of MWSNs. In addition, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles toward better positions quickly. Furthermore, the search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and the energy consumption.

  13. Radial particle distributions in PARMILA simulation beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boicourt, G.P.

    1984-03-01

    The estimation of beam spill in particle accelerators is becoming of greater importance as higher-current designs are being funded. To the present, no numerical method for predicting beam spill has been available. In this paper, we present an approach to the loss-estimation problem that uses probability distributions fitted to particle-simulation beams. The properties of the PARMILA code's radial particle distribution are discussed, and a broad class of probability distributions are examined to check their ability to fit it. The possibility that the PARMILA distribution is a mixture is discussed, and a fitting distribution consisting of a mixture of two generalized gamma distributions is found. An efficient algorithm to accomplish the fit is presented. Examples of the relative prediction of beam spill are given. 26 references, 18 figures, 1 table.

  14. Random walk, diffusion and mixing in simulations of scalar transport in fluid flows

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2008-12-01

    Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
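
    A sketch of the scheme discussed above: Lagrangian particles random-walk (representing transport and diffusion in physical space) while nearby pairs mix their scalar values toward the pair mean, the mixing standing in for molecular diffusion of the scalar. The pairing rule, mixing extent, and parameter values are illustrative assumptions, not the paper's scheme.

    import numpy as np

    rng = np.random.default_rng(4)
    N, D, dt, a, r_mix = 5000, 0.1, 1e-3, 0.5, 0.02   # invented parameters
    x = rng.random(N)                                  # particle positions on [0, 1)
    phi = np.sin(2 * np.pi * x)                        # initial scalar field
    for _ in range(500):
        # random walk consistent with diffusivity D (variance 2*D*dt per step)
        x = (x + rng.normal(0, np.sqrt(2 * D * dt), N)) % 1.0
        order = np.argsort(x)                          # pair spatial neighbours
        i, j = order[0::2], order[1::2]
        close = np.abs(x[i] - x[j]) < r_mix
        mean = 0.5 * (phi[i[close]] + phi[j[close]])
        phi[i[close]] += a * (mean - phi[i[close]])    # conservative pairwise mixing
        phi[j[close]] += a * (mean - phi[j[close]])
    print("scalar variance after mixing:", phi.var())  # decays, mimicking diffusion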

  15. Coarse-graining to the meso and continuum scales with molecular-dynamics-like models

    NASA Astrophysics Data System (ADS)

    Plimpton, Steve

    Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.

  16. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
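
    A toy Monte Carlo version of the constant-event-time picture above: histories survive each event with a fixed probability, the surviving bank is processed in SIMD chunks each event-iteration, and idle lanes in the final partial chunk are wasted. The survival probability and sizes are invented; this is not the authors' model, only an illustration of why a large bank keeps the vector lanes full.

    import numpy as np

    def vector_efficiency(bank_size, vector_width, p_survive=0.95, seed=5):
        """Fraction of SIMD lanes doing useful work over a full bank lifetime."""
        rng = np.random.default_rng(seed)
        alive, useful, wasted = bank_size, 0, 0
        while alive > 0:
            chunks = -(-alive // vector_width)       # ceil(alive / vector_width)
            useful += alive
            wasted += chunks * vector_width - alive  # idle lanes in the last chunk
            alive = int(rng.binomial(alive, p_survive))  # some histories terminate
        return useful / (useful + wasted)

    for B in (64, 256, 1280, 5120):
        print(f"bank {B:5d}: vector efficiency {vector_efficiency(B, 64):.3f}")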

  17. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing

    PubMed Central

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309

  18. Monte Carlo N-Particle Tracking of Ultrafine Particle Flow in Bent Micro-Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casella, Andrew M.; Loyalka, Sudarshan K.

    2016-02-16

    The problem of large pressure-differential-driven laminar convective-diffusive ultrafine aerosol flow through bent micro-tubes is of interest in several contemporary research areas, including release of contents from pressurized containment vessels, aerosol sampling equipment, advanced scientific instruments, gas-phase micro-heat exchangers, and microfluidic devices. In each of these areas, the predominant problem is determining the fraction of particles entering the micro-tube that is deposited within the tube and the fraction that is transmitted through. Due to the extensive parameter restrictions of this class of problems, a Lagrangian particle tracking method coupling the analytical streamline solutions of Dean with the simplified Langevin equation is quite a useful tool for problem characterization. This method is a direct analog of the Monte Carlo N-Particle method of particle transport extensively used in nuclear physics and engineering. In this work, 10 nm diameter particles with a density of 1 g/cm³ are tracked within micro-tubes with toroidal bends, with pressure differentials ranging between 0.2175 and 0.87 atmospheres. The tubes have radii of 25 microns and 50 microns, and the radius of curvature is between 1 m and 0.3183 cm. The carrier gas is helium, and temperatures of 298 K and 558 K are considered. Numerical convergence is considered as a function of time step size and of the number of particles per simulation. Particle transmission rates and deposition patterns within the bent micro-tubes are calculated.
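
    The Lagrangian tracking idea above, in its simplest overdamped form, is sketched below: Euler-Maruyama advection along analytic streamlines plus a Brownian kick, with deposition on first wall contact. A straight tube with a Poiseuille profile stands in for the paper's bent-tube Dean-flow streamlines, particles are injected on the axis for simplicity, and all values are illustrative rather than the paper's helium cases.

    import numpy as np

    kB = 1.380649e-23                      # Boltzmann constant [J/K]
    rng = np.random.default_rng(6)
    N, R, L = 2000, 25e-6, 1e-3            # particles, tube radius [m], tube length [m]
    T, mu, d_p = 298.0, 2.0e-5, 10e-9      # temperature [K], viscosity [Pa s], diameter [m]
    u_max, dt = 1.0, 1e-6                  # centreline speed [m/s], time step [s]
    D = kB * T / (3.0 * np.pi * mu * d_p)  # Stokes-Einstein diffusivity

    z = np.zeros(N)                        # axial positions
    y = np.zeros((N, 2))                   # transverse positions (start on the axis)
    deposited = np.zeros(N, dtype=bool)

    for _ in range(20000):                 # step cap keeps the demo bounded
        active = ~deposited & (z < L)
        if not active.any():
            break
        r2 = y[:, 0]**2 + y[:, 1]**2
        z += np.where(active, u_max * np.maximum(1.0 - r2 / R**2, 0.0) * dt, 0.0)
        kick = np.sqrt(2.0 * D * dt) * rng.normal(size=(N, 2))   # Brownian motion
        y += np.where(active[:, None], kick, 0.0)
        deposited |= active & (y[:, 0]**2 + y[:, 1]**2 >= R**2)  # wall contact

    print("transmitted fraction:", np.mean(~deposited & (z >= L)))
    print("deposited fraction:  ", deposited.mean())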

  19. Examining the accuracy of astrophysical disk simulations with a generalized hydrodynamical test problem

    DOE PAGES

    Raskin, Cody; Owen, J. Michael

    2016-10-24

    Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  20. Entanglement and the fermion sign problem in auxiliary field quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Broecker, Peter; Trebst, Simon

    2016-08-01

    Quantum Monte Carlo simulations of fermions are hampered by the notorious sign problem whose most striking manifestation is an exponential growth of sampling errors with the number of particles. With the sign problem known to be an NP-hard problem and any generic solution thus highly elusive, the Monte Carlo sampling of interacting many-fermion systems is commonly thought to be restricted to a small class of model systems for which a sign-free basis has been identified. Here we demonstrate that entanglement measures, in particular the so-called Rényi entropies, can intrinsically exhibit a certain robustness against the sign problem in auxiliary-field quantum Monte Carlo approaches and possibly allow for the identification of global ground-state properties via their scaling behavior even in the presence of a strong sign problem. We corroborate these findings via numerical simulations of fermionic quantum phase transitions of spinless fermions on the honeycomb lattice at and below half filling.

  1. Fast Multipole Methods for Three-Dimensional N-body Problems

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.

    1995-01-01

    We are developing computational tools for the simulations of three-dimensional flows past bodies undergoing arbitrary motions. High resolution viscous vortex methods have been developed that allow for extended simulations of two-dimensional configurations such as vortex generators. Our objective is to extend this methodology to three dimensions and develop a robust computational scheme for the simulation of such flows. A fundamental issue in the use of vortex methods is the ability to employ efficiently large numbers of computational elements to resolve the large range of scales that exist in complex flows. The traditional cost of the method scales as O(N²) as the N computational elements/particles induce velocities on one another, making the method unacceptable for simulations involving more than a few tens of thousands of particles. In the last decade fast methods have been developed that have operation counts of O(N log N) or O(N) (referred to as BH and GR respectively), depending on the details of the algorithm. These methods are based on the observation that the effect of a cluster of particles at a certain distance may be approximated by a finite series expansion. In order to exploit this observation we need to decompose the element population spatially into clusters of particles and build a hierarchy of clusters (a tree data structure) - smaller neighboring clusters combine to form a cluster of the next size up in the hierarchy and so on. This hierarchy of clusters allows one to determine efficiently when the approximation is valid. This algorithm is an N-body solver that appears in many fields of engineering and science. Some examples of its diverse use are in astrophysics, molecular dynamics, micro-magnetics, boundary element simulations of electromagnetic problems, and computer animation. More recently these N-body solvers have been implemented and applied in simulations involving vortex methods. Koumoutsakos and Leonard (1995) implemented the GR scheme in two dimensions for vector computer architectures allowing for simulations of bluff body flows using millions of particles. Winckelmans presented three-dimensional, viscous simulations of interacting vortex rings, using vortons and an implementation of a BH scheme for parallel computer architectures. Bhatt presented a vortex filament method to perform inviscid vortex ring interactions, with an alternative implementation of a BH scheme for a Connection Machine parallel computer architecture.
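
    The key observation named above, that the field of a distant cluster is well approximated by a truncated series expansion, can be demonstrated with the lowest-order (monopole, centre-of-mass) term alone; the error falls off with distance, which is what lets the cluster hierarchy replace the O(N²) pair sum. The 2-D setup and the gravitational-style kernel are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    cluster = rng.normal(0.0, 0.1, size=(500, 2))   # cluster of unit masses near origin
    com = cluster.mean(axis=0)                      # centre of mass

    def exact_accel(p):
        """Direct sum over all cluster particles (the O(N^2) building block)."""
        d = cluster - p
        r = np.linalg.norm(d, axis=1)
        return (d / r[:, None]**3).sum(axis=0)

    def monopole_accel(p):
        """Single centre-of-mass term: the lowest-order cluster expansion."""
        d = com - p
        return len(cluster) * d / np.linalg.norm(d)**3

    for dist in (0.5, 1.0, 2.0, 4.0):
        p = np.array([dist, 0.0])
        err = (np.linalg.norm(exact_accel(p) - monopole_accel(p))
               / np.linalg.norm(exact_accel(p)))
        print(f"evaluation distance {dist}: relative error {err:.2e}")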

  2. A Simulation to Study Speed Distributions in a Solar Plasma

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Alvarellos, Jose Luis

    1999-01-01

    We have carried out a numerical simulation of a plasma with characteristics similar to those found in the core of the Sun. Particular emphasis is placed on the Coulomb interaction between the ions and electrons, which could result in a relative velocity distribution different from the Maxwell-Boltzmann (MB) distribution generally assumed for a plasma. The fact that the distribution may not exactly follow the MB distribution could have very important consequences for a variety of problems in solar physics, especially the neutrino problem. Very briefly, the neutrino problem is that the observed neutrino detections from the Sun are smaller than what standard solar theory predicts. In Section I we introduce the problem, and in Section II we discuss the approach taken to solve it, namely a molecular dynamics approach. In Section III we provide details about the integration method and the simplifications that can be applied to the problem. In Section IV (the core of this report) we state our results, first for the specific case of 1000 particles and then for cases with different numbers of particles. In Section V we summarize our findings and state our conclusions. Sections VI, VII, and VIII provide the list of figures, reference material, and acknowledgements, respectively.

  3. Estimation of the particle concentration in hydraulic liquid by the in-line automatic particle counter based on the CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kornilin, Dmitriy V.; Kudryavtsev, Ilya A.; McMillan, Alison J.; Osanlou, Ardeshir; Ratcliffe, Ian

    2017-06-01

    Modern hydraulic systems should be monitored on a regular basis. One of the most effective ways to address this task is to use in-line automatic particle counters (APCs) built into the system. The measurement of particle concentration in hydraulic liquid by an APC is crucial because an increasing number of particles indicates functional problems. Existing automatic particle counters have significant limitations: they cannot precisely measure the relatively low particle concentrations found in aerospace systems, or they are unable to measure the higher concentrations found in industrial ones. Both issues can be addressed by implementing a CMOS image sensor instead of the single photodiode used in most APCs. The CMOS image sensor helps to overcome volume-measurement errors caused by the unequal particle speeds inside the tube. The correction is based on determining the particle position and the parabolic velocity distribution profile. The proposed algorithms also reduce the errors related to particle coincidences in the measurement volume. Simulation results show that the accuracy increases up to 90 percent and the resolution improves roughly tenfold compared to the single-photodiode sensor.

  4. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
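
    A 1-D caricature of the random-finite-difference (RFD) treatment of the stochastic drift term mentioned above, the kT dM/dx contribution that appears whenever the mobility M depends on the configuration (e.g. height above a no-slip wall): in expectation the RFD term converges to kT M'(x) without ever differentiating M analytically. The mobility model, force, and parameters are invented for illustration; the paper's scheme operates on full many-body RPY mobility matrices.

    import numpy as np

    kT, dt, delta = 1.0, 1e-3, 1e-4

    def M(x):
        """Toy wall-hindered mobility: grows with distance from the wall at x = 0."""
        return x / (1.0 + x)

    def bd_step(x, F, rng):
        """Euler-Maruyama Brownian dynamics step with an RFD drift term."""
        W = rng.normal()   # one RFD random variable (a random vector in many dimensions)
        # E[drift] -> kT * M'(x), the divergence-of-mobility term, as delta -> 0
        drift = (kT / delta) * (M(x + 0.5 * delta * W) - M(x - 0.5 * delta * W)) * W
        return (x + M(x) * F * dt + drift * dt
                + np.sqrt(2.0 * kT * M(x) * dt) * rng.normal())

    rng = np.random.default_rng(8)
    x = 1.0
    for _ in range(10000):
        x = max(bd_step(x, F=-1.0, rng=rng), 1e-6)   # gravity-like force, crude wall floor
    print("final height above wall:", x)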

  5. Fiber Bragg grating filter using evaporated induced self assembly of silica nano particles

    NASA Astrophysics Data System (ADS)

    Hammarling, Krister; Zhang, Renyung; Manuilskiy, Anatoliy; Nilsson, Hans-Erik

    2014-03-01

    In the present work we study fiber filters produced by evaporating silica particles onto an MM-fiber core. A band filter was designed and theoretically verified using a 2D Comsol simulation model of the 3D problem, calculated in the frequency domain with respect to refractive index. The fiber filters were fabricated by stripping and chemically etching the middle part of an MM-fiber until the core was exposed. A monolayer of silica nanoparticles was deposited on the core using the Evaporation Induced Self-Assembly (EISA) method. The experimental results indicated a broader bandwidth than predicted by the simulations, which can be explained by the mismatch in the particle size distributions, by uneven particle packing, and finally by effects from multiple mode angles; there are thus several closely spaced Bragg wavelengths that build up the broader bandwidth. The experiments show that, by narrowing the particle size distribution and better controlling the particle packing, the filter effectiveness can be greatly improved.

  6. Effects of damage and thermal residual stresses on the overall elastoplastic behavior of particle-reinforced metal matrix composites

    NASA Astrophysics Data System (ADS)

    Liu, Haitao

    The objective of the present study is to investigate damage mechanisms and thermal residual stresses in composites, and to establish frameworks to model particle-reinforced metal matrix composites with particle-matrix interfacial debonding, particle cracking, or thermal residual stresses. An evolutionary interfacial debonding model is proposed for composites with spheroidal particles. The construction of the equivalent stiffness is based on the fact that when debonding occurs in a certain direction, the load-transfer ability is lost in that direction. By using this equivalent method, the interfacial debonding problem can be converted into a composite problem with perfectly bonded inclusions. Considering that interfacial debonding is a progressive process in which the debonding area increases in proportion to external loading, a progressive interfacial debonding model is proposed. In this model, the relation between external loading and the debonding area is established using a normal-stress-controlled debonding criterion. Furthermore, an equivalent orthotropic stiffness tensor is constructed based on the debonding areas. This model can treat composites with randomly distributed spherical particles. The double-inclusion theory is recalled to model particle cracking problems. Cracks inside particles are treated as penny-shaped particles with zero stiffness. The disturbed stress field due to the existence of a double inclusion is expressed explicitly. Finally, a thermal mismatch eigenstrain is introduced to simulate the inconsistent expansions of the matrix and the particles due to the difference in their coefficients of thermal expansion. Micromechanical stress and strain fields are calculated for the combination of applied external loads and the prescribed thermal mismatch eigenstrains. For all of the above models, ensemble-volume averaging procedures are employed to derive the effective yield function of the composites. Numerical simulations are performed to analyze the effects of various parameters, and good agreement between the model's predictions and experimental results is obtained in several cases. All expressions in these frameworks are derived explicitly, and the analytical results can easily be adopted in other related investigations.

  7. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
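
    The surrogate-plus-backprop inverse design idea above can be illustrated at toy scale: train a small network to mimic a (stand-in) response function, then hold the weights fixed and descend the input using the network's analytic gradient. The target function, network size, and learning rates below are all invented; this is not the authors' multilayer-nanoparticle model.

    import numpy as np

    rng = np.random.default_rng(10)
    f = lambda x: np.sin(3.0 * x) * np.exp(-x**2)      # stand-in "simulator"
    X = rng.uniform(-2, 2, (2000, 1)); Y = f(X)
    W1, b1 = rng.normal(0, 1, (1, 32)), np.zeros(32)   # one hidden layer of 32 tanh units
    w2, b2 = rng.normal(0, 0.1, (32, 1)), np.zeros(1)
    lr = 0.05
    for _ in range(3000):                              # train the surrogate (manual backprop)
        H = np.tanh(X @ W1 + b1); P = H @ w2 + b2
        g = 2.0 * (P - Y) / len(X)                     # dLoss/dP for mean squared error
        gH = g @ w2.T * (1 - H**2)
        w2 -= lr * H.T @ g;  b2 -= lr * g.sum(0)
        W1 -= lr * X.T @ gH; b1 -= lr * gH.sum(0)

    x, target = np.array([[1.5]]), 0.4                 # inverse design: find x with f(x) ~ 0.4
    for _ in range(500):
        h = np.tanh(x @ W1 + b1); p = h @ w2 + b2
        dp_dx = ((1 - h**2) * w2.T) @ W1.T             # analytic input gradient
        x -= 0.1 * 2.0 * (p - target) * dp_dx          # gradient descent on the *input*
    print("designed x:", x.ravel(), " network:", p.ravel(), " true f(x):", f(x).ravel())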

  8. Monte Carlo simulations of particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.

    1994-01-01

    The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Theta(sub B1) to the shock normal. Spectral results from test particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.

  9. Astronomical Constraints on Quantum Cold Dark Matter

    NASA Astrophysics Data System (ADS)

    Spivey, Shane; Musielak, Z.; Fry, J.

    2012-01-01

    A model of quantum ('fuzzy') cold dark matter that accounts for both the halo core problem and the missing dwarf galaxies problem, which plague the usual cold dark matter paradigm, is developed. The model requires that a cold dark matter particle has a mass so small that its only allowed physical description is a quantum wave function. Each such particle in a galactic halo is bound to a gravitational potential that is created by luminous matter and by the halo itself, and the resulting wave function is described by a Schrödinger equation. To solve this equation on a galactic scale, we impose astronomical constraints that involve several density profiles used to fit data from simulations of dark matter galactic halos. The solutions to the Schrödinger equation are quantum waves which resemble the density profiles acquired from simulations, and they are used to determine the mass of the cold dark matter particle. The effects of adding certain types of baryonic matter to the halo, such as a dwarf elliptical galaxy or a supermassive black hole, are also discussed.

  10. Particle-In-Cell simulations of high pressure plasmas using graphics processing units

    NASA Astrophysics Data System (ADS)

    Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter

    2009-10-01

    Particle-In-Cell (PIC) simulations are widely used to understand the fundamental phenomena in low-temperature plasmas. In particular, plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time consuming, since certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high-pressure plasmas, due to the very high collision rates. The simulations take a very long time to run on standard computers and require computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multiprocessor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high-pressure plasmas using the benefits of GPU programming.

  11. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    NASA Astrophysics Data System (ADS)

    Wen, Yuanhua

    2018-04-01

    In order to solve the optimal configuration problem of power grid sources, an optimized particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm, and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, laying the foundation for the subsequent micro-grid power configuration solution.

  12. A sharp interface Cartesian grid method for viscous simulation of shocked particle-laden flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-09-01

    A Cartesian grid-based sharp interface method is presented for viscous simulations of shocked particle-laden flows. The moving solid-fluid interfaces are represented using level sets. A moving least-squares reconstruction is developed to apply the no-slip boundary condition at solid-fluid interfaces and to supply viscous stresses to the fluid. The algorithms developed in this paper are benchmarked against similarity solutions for the boundary layer over a fixed flat plate and against numerical solutions for moving interface problems such as shock-induced lift-off of a cylinder in a channel. The framework is extended to 3D and applied to calculate low Reynolds number steady supersonic flow over a sphere. Viscous simulation of the interaction of a particle cloud with an incident planar shock is demonstrated; the average drag on the particles and the vorticity field in the cloud are compared to the inviscid case to elucidate the effects of viscosity on momentum transfer between the particle and fluid phases. The methods developed will be useful for obtaining accurate momentum and heat transfer closure models for macro-scale shocked particulate flow applications such as blast waves and dust explosions.

  13. Kassiopeia: a modern, extensible C++ particle tracking package

    NASA Astrophysics Data System (ADS)

    Furse, Daniel; Groh, Stefan; Trost, Nikolaus; Babutzka, Martin; Barrett, John P.; Behrens, Jan; Buzinsky, Nicholas; Corona, Thomas; Enomoto, Sanshiro; Erhard, Moritz; Formaggio, Joseph A.; Glück, Ferenc; Harms, Fabian; Heizmann, Florian; Hilk, Daniel; Käfer, Wolfgang; Kleesiek, Marco; Leiber, Benjamin; Mertens, Susanne; Oblath, Noah S.; Renschler, Pascal; Schwarz, Johannes; Slocum, Penny L.; Wandkowsky, Nancy; Wierman, Kevin; Zacher, Michael

    2017-05-01

    The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease-of-use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle’s state evolves. Thanks to the very general approach taken by Kassiopeia it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.

  14. Collisional PIC Simulations of Particles in Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Peter, William

    2003-10-01

    Because of the long range of Coulomb forces, collisions with distant particles in plasmas are more important than collisions with near neighbors. In addition, many problems in space physics and magnetic confinement include regions of weak magnetic field where the MHD approximation breaks down. A particle-in-cell code based on the quiet direct simulation Monte-Carlo method (B. J. Albright, W. Daughton, D. Lemons, D. Winske, and M. E. Jones, Physics of Plasmas 9, 1898 (2002)) is being developed to study collisional (e.g., ν ∼ Ω) particle motion in magnetic fields. The primary application is to energetic particle loss in the radiation belts (K. Papadopoulos, COSPAR Meeting, Houston, TX, Oct. 2002) at a given energy and L-shell. Other applications include trapping in rotating field-reversed configurations (N. Rostoker and A. Qerushi, Physics of Plasmas 9, 3057 (2002)), and electron behavior in magnetic traps (V. Gorgadze, T. Pasquini, J. S. Wurtele, and J. Fajans, Bull. Am. Phys. Soc. 47, 127 (2002)). The use of the random time-step method (W. Peter, Bull. Am. Phys. Soc. 47, 52 (2002)) to decrease simulation times by 1-2 orders of magnitude is also being studied.

  15. Brownian Dynamics simulations of model colloids in channel geometries and external fields

    NASA Astrophysics Data System (ADS)

    Siems, Ullrich; Nielaba, Peter

    2018-04-01

    We review the results of Brownian Dynamics simulations of colloidal particles in external fields confined in channels. Super-paramagnetic Brownian particles are well-suited two-dimensional model systems for a variety of problems on different length scales, ranging from pedestrians walking through a bottleneck to ions passing through ion channels in living cells. In such systems, confinement in channels can have a great influence on the diffusion and transport properties. In particular, we discuss the crossover from single-file diffusion in a narrow channel to diffusion in the extended two-dimensional system. For this purpose, a new algorithm for computing the mean square displacement (MSD) on logarithmic time scales is presented. In a separate study, interacting colloidal particles were dragged over a washboard potential while additionally confined in a two-dimensional micro-channel. In this system, kink and anti-kink solitons determine the depinning of the particles from the periodic potential.
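
    A sketch of evaluating the MSD on logarithmically spaced lag times, so that sub-diffusive single-file scaling (t^1/2) and ordinary diffusion (t^1) can be resolved over many decades without computing every lag. The trajectory is a plain random walk for illustration, and the particular lag grid is an assumption, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(9)
    n_steps, n_particles = 2**16, 50
    steps = rng.normal(0.0, 1.0, (n_steps, n_particles))
    traj = np.cumsum(steps, axis=0)                     # free 1-D random walks

    # logarithmically spaced lag times (deduplicated after integer rounding)
    lags = np.unique(np.logspace(0, np.log10(n_steps - 1), 40).astype(int))
    msd = [np.mean((traj[lag:] - traj[:-lag])**2) for lag in lags]

    for lag, m in zip(lags[::8], msd[::8]):
        print(f"lag {lag:6d}   MSD {m:10.1f}")          # ~linear in lag for free diffusion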

  16. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model was proposed, using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as the initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and that the particle swarm optimization technique is a good tool for solving parameter optimization problems. Moreover, an optimized model with an initial condition that performs well in in-sample simulation may not do as well in out-of-sample forecasting.
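
    For reference, here is a generic particle swarm optimization loop of the kind used above to fit model parameters; the objective is a stand-in sphere function, and the inertia and acceleration coefficients are common textbook values rather than the paper's settings.

    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, dim))      # positions
        v = np.zeros_like(x)                             # velocities
        pbest = x.copy()                                 # per-particle best positions
        pbest_f = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()           # global best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    best, fbest = pso(lambda p: np.sum(p**2), dim=3)
    print("best parameters:", best, " objective:", fbest)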

  17. Noiseless Vlasov-Poisson simulations with linearly transformed particles

    DOE PAGES

    Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...

    2014-06-25

    We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first-order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. The benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.

  18. Suppression of Space Charge Induced Beam Halo in Nonlinear Focusing Channel

    DOE PAGES

    Batygin, Yuri Konstantinovich; Scheinker, Alexander; Kurennoy, Sergey; ...

    2016-01-29

    An intense non-uniform particle beam exhibits strong emittance growth and halo formation in focusing channels due to nonlinear space charge forces of the beam. This phenomenon limits beam brightness and results in particle losses. The problem is connected with irreversible distortion of the phase space volume of the beam in conventional focusing structures due to filamentation in phase space. Emittance growth is accompanied by halo formation in real space, which results in inevitable particle losses. We discuss a new approach for solving a self-consistent problem for a matched non-uniform beam in two-dimensional geometry. The resulting solution is applied to the problem of beam transport, avoiding emittance growth and halo formation by the use of a nonlinear focusing field. Conservation of the beam distribution function is demonstrated analytically and by particle-in-cell simulation for a beam with a realistic distribution.

  19. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  20. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm to solve it. Illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  1. Real-Time Investigation of Solidification of Metal Matrix Composites

    NASA Technical Reports Server (NTRS)

    Kaukler, William; Sen, Subhayu

    1999-01-01

    Casting of metal matrix composites can develop imperfections either as non-uniform distributions of the reinforcement phases or as outright defects such as porosity. The solidification process itself initiates these problems. To identify or rectify the problems, one must be able to detect and study how they form. Until recently, this was only possible by experiments that employed transparent organic materials, which model metals, with glass beads to simulate the reinforcing phases. Recent results obtained from a Space Shuttle experiment (using transparent materials) will be used to illustrate the fundamental physics that dictates the final distribution of agglomerates in a casting. We have further extended this real-time investigation to aluminum alloys using X-ray microscopy. A variety of interface-particle interactions will be discussed, along with how they alter the final properties of the composite. Demonstrations of how a solid-liquid interface is distorted by nearby voids or particles, particle pushing or engulfment by the interface, formation of wormholes, aggregation of particles, and particle-induced segregation of alloying elements will be presented.

  2. Application of stochastic weighted algorithms to a multidimensional silica particle model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs' solutions converge to the direct simulation algorithm's (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  3. Airflow and particle deposition simulations in health and emphysema: from in vivo to in silico animal experiments.

    PubMed

    Oakes, Jessica M; Marsden, Alison L; Grandmont, Celine; Shadden, Shawn C; Darquenne, Chantal; Vignon-Clementel, Irene E

    2014-04-01

    Image-based in silico modeling tools provide detailed velocity and particle deposition data. However, care must be taken when prescribing boundary conditions to model lung physiology in health or disease, such as in emphysema. In this study, the respiratory resistance and compliance were obtained by solving an inverse problem: fitting a 0D global model to healthy and emphysematous rat experimental data. Multi-scale CFD simulations were performed by solving the 3D Navier-Stokes equations in an MRI-derived rat geometry coupled to a 0D model. Particles with 0.95 μm diameter were tracked and their distribution in the lung was assessed. Seven 3D-0D simulations were performed: healthy, homogeneous, and five heterogeneous emphysema cases. Compliance (C) was significantly higher (p = 0.04) in the emphysematous rats (C = 0.37 ± 0.14 cm^3/cmH2O) compared to the healthy rats (C = 0.25 ± 0.04 cm^3/cmH2O), while the resistance remained unchanged (p = 0.83). There were increases in airflow, particle deposition in the 3D model, and particle delivery to the diseased regions for the heterogeneous cases compared to the homogeneous cases. The results highlight the importance of multi-scale numerical simulations for studying airflow and particle distribution in healthy and diseased lungs. The effects of particle size and gravity were also studied. Once available, these in silico predictions may be compared to experimental deposition data.

  4. An efficient multi-dimensional implementation of VSIAM3 and its applications to free surface flows

    NASA Astrophysics Data System (ADS)

    Yokoi, Kensuke; Furuichi, Mikito; Sakai, Mikio

    2017-12-01

    We propose an efficient multidimensional implementation of VSIAM3 (volume/surface integrated average-based multi-moment method). Although VSIAM3 is a highly capable fluid solver based on a multi-moment concept and has been used for a wide variety of fluid problems, VSIAM3 could not simulate some simple benchmark problems well (for instance, lid-driven cavity flows) due to relatively high numerical viscosity. In this paper, we resolve the issue by using an efficient multidimensional approach. The proposed VSIAM3 is shown to capture lid-driven cavity flows at Reynolds numbers up to Re = 7500 with a Cartesian grid of 128 × 128, which the original VSIAM3 could not achieve. We also tested the proposed framework on free surface flow problems (droplet collision and separation at We = 40, and droplet splashing on a superhydrophobic substrate). The numerical results from the proposed VSIAM3 show reasonable agreement with these experiments. The proposed VSIAM3 could capture droplet collision and separation at We = 40 with a low numerical resolution (8 meshes across the initial droplet diameter). We also simulated free surface flows including particles, toward non-Newtonian flow applications. These results show that the proposed VSIAM3 can robustly simulate interactions among air, particles (solid), and liquid.

  5. Progress on H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt

    Significant problems facing all experimental and computational sciences arise from growing data size and complexity. Common to all these problems is the need to perform efficient data I/O on diverse computer architectures. In our scientific application, the largest parallel particle simulations generate vast quantities of six-dimensional data. Such a simulation run produces data for an aggregate data size up to several TB per run. Motivated by the need to address data I/O and access challenges, we have implemented H5Part, an open source data I/O API that simplifies the use of the Hierarchical Data Format v5 library (HDF5). HDF5 is an industry standard for high performance, cross-platform data storage and retrieval that runs on all contemporary architectures, from large parallel supercomputers to laptops. H5Part, which is oriented to the needs of the particle physics and cosmology communities, provides support for parallel storage and retrieval of particles, structured meshes and, in the future, unstructured meshes. In this paper, we describe recent work focusing on I/O support for particles and structured meshes and provide data showing performance on modern supercomputer architectures like the IBM POWER 5.

  6. A novel Kinetic Monte Carlo algorithm for Non-Equilibrium Simulations

    NASA Astrophysics Data System (ADS)

    Jha, Prateek; Kuzovkov, Vladimir; Grzybowski, Bartosz; Olvera de La Cruz, Monica

    2012-02-01

    We have developed an off-lattice kinetic Monte Carlo simulation scheme for reaction-diffusion problems in soft matter systems. The transition probabilities in the Monte Carlo scheme are taken to be identical to the transition rates in a renormalized master equation of the diffusion process, and match those of the Glauber dynamics of the Ising model. Our scheme provides several advantages over the Brownian dynamics technique for non-equilibrium simulations. Since particle displacements are accepted or rejected in a Monte Carlo fashion, as opposed to moving particles following a stochastic equation of motion, nonphysical movements (e.g., violation of a hard-core assumption) are not possible (these moves have zero acceptance). Further, the absence of a stochastic "noise" term resolves the computational difficulties associated with generating statistically independent trajectories with definitive mean properties. Finally, since the time step is independent of the magnitude of the interaction forces, much longer time steps can be employed than in Brownian dynamics. We discuss the applications of this scheme to the dynamic self-assembly of photo-switchable nanoparticles and to dynamical problems in polymeric systems.
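
    As a rough illustration of the scheme described above (not the authors' code), the following sketch combines an off-lattice trial displacement, outright rejection of hard-core violations, and a Glauber-type acceptance probability; the trap energy and all parameters are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def glauber_accept(delta_e, beta=1.0):
        """Glauber acceptance probability for an energy change delta_e."""
        return 1.0 / (1.0 + np.exp(np.clip(beta * delta_e, -700, 700)))

    def try_move(x, energy, step=0.1, hard_core=1.0):
        """Attempt one off-lattice displacement of a randomly chosen particle.

        Moves violating the hard-core distance are rejected outright
        (zero acceptance); otherwise Glauber acceptance is applied.
        """
        i = rng.integers(len(x))
        trial = x.copy()
        trial[i] += rng.uniform(-step, step, size=x.shape[1])
        # Hard-core check against all other particles.
        d = np.linalg.norm(np.delete(trial, i, axis=0) - trial[i], axis=1)
        if np.any(d < hard_core):
            return x  # nonphysical move: zero acceptance
        if rng.random() < glauber_accept(energy(trial) - energy(x)):
            return trial
        return x

    # Toy system: 10 hard-core particles in 2D with a weak harmonic trap.
    x = rng.uniform(0, 10, size=(10, 2))
    for _ in range(1000):
        x = try_move(x, lambda c: 0.05 * np.sum(c**2))
    print(x.mean(axis=0))  # particles drift toward the trap center at (0, 0)
    ```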

  7. Brownian aggregation rate of colloid particles with several active sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nekrasov, Vyacheslav M.; Yurkin, Maxim A.; Chernyshev, Andrei V., E-mail: chern@ns.kinetics.nsc.ru

    2014-08-14

    We theoretically analyze the aggregation kinetics of colloid particles with several active sites. Such particles (so-called “patchy particles”) are well known as chemically anisotropic reactants, but the corresponding rate constant of their aggregation has not yet been established in a convenient analytical form. Using the kinematic approximation for the diffusion problem, we derived an analytical formula for the diffusion-controlled reaction rate constant between two colloid particles (or clusters) with several small active sites under the following assumptions: the relative translational motion is Brownian diffusion, and the isotropic stochastic reorientation of each particle is Markovian and arbitrarily correlated. This formula was shown to produce accurate results in comparison with more sophisticated approaches. Also, to account for the case of a low number of active sites per particle, we used a stochastic Monte Carlo algorithm based on the Gillespie method. Simulations showed that such a discrete model is required when this number is less than 10. Finally, we applied the developed approach to the simulation of immunoagglutination, assuming that the formed clusters have a fractal structure.

  8. Multi-Objective Bidding Strategy for Genco Using Non-Dominated Sorting Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Saksinchai, Apinat; Boonchuay, Chanwit; Ongsakul, Weerakorn

    2010-06-01

    This paper proposes a multi-objective bidding strategy for a generation company (GenCo) in a uniform price spot market using non-dominated sorting particle swarm optimization (NSPSO). Instead of using a tradeoff technique, NSPSO is introduced to solve the multi-objective strategic bidding problem, considering expected profit maximization and risk (profit variation) minimization. Monte Carlo simulation is employed to simulate rivals' bidding behavior. Test results indicate that the proposed approach can effectively provide the non-dominated solution front. In addition, it can be used as a decision-making tool for a GenCo balancing expected profit against price risk in the spot market.

  9. CELES: CUDA-accelerated simulation of electromagnetic scattering by large ensembles of spheres

    NASA Astrophysics Data System (ADS)

    Egel, Amos; Pattelli, Lorenzo; Mazzamuto, Giacomo; Wiersma, Diederik S.; Lemmer, Uli

    2017-09-01

    CELES is a freely available MATLAB toolbox to simulate light scattering by many spherical particles. Aiming at high computational performance, CELES leverages block-diagonal preconditioning, a lookup-table approach to evaluate costly functions, and massively parallel execution on NVIDIA graphics processing units using the CUDA computing platform. The combination of these techniques makes it possible to efficiently address large electrodynamic problems (>10^4 scatterers) on inexpensive consumer hardware. In this paper, we validate near- and far-field distributions against the well-established multi-sphere T-matrix (MSTM) code and discuss the convergence behavior for ensembles of different sizes, including an exemplary system comprising 10^5 particles.

  10. Hybrid particle-field molecular dynamics simulation for polyelectrolyte systems.

    PubMed

    Zhu, You-Liang; Lu, Zhong-Yuan; Milano, Giuseppe; Shi, An-Chang; Sun, Zhao-Yan

    2016-04-14

    To achieve simulations on large spatial and temporal scales with high molecular chemical specificity, a hybrid particle-field method was proposed recently. This method is developed by combining molecular dynamics and self-consistent field theory (MD-SCF). The MD-SCF method has been validated by successfully predicting the experimentally observable properties of several systems. Here we propose an efficient scheme for the inclusion of electrostatic interactions in the MD-SCF framework. In this scheme, charged molecules interact with external fields that are self-consistently determined from the charge densities. This method is validated by comparing the structural properties of polyelectrolytes in solution obtained from MD-SCF and particle-based simulations. Moreover, taking PMMA-b-PEO and LiCF3SO3 as examples, the enhancement of immiscibility between the ion-dissolving block and the inert block upon doping lithium salts into the copolymer is examined using the MD-SCF method. With GPU acceleration, the high performance of the MD-SCF method with explicit treatment of electrostatics facilitates the simulation of many problems involving polyelectrolytes.
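
    The MD-SCF equations themselves are beyond a short example, but the underlying particle-field idea, particles feeling a potential derived from their own smoothed density instead of explicit pair forces, can be sketched in one dimension (the coupling `chi` and all numbers are illustrative assumptions, not the paper's model):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Minimal 1D particle-field loop: particles feel a potential derived from
    # their own grid-smoothed density rather than from explicit pair forces.
    L, n_grid, n_part = 10.0, 64, 200
    dx = L / n_grid
    chi = 1.0   # assumed strength of the density-dependent (repulsive) potential
    x = rng.uniform(0, L, n_part)

    def density(x):
        """Cloud-in-cell deposition of the particle number density onto the grid."""
        rho = np.zeros(n_grid)
        j = np.floor(x / dx).astype(int) % n_grid
        f = x / dx - np.floor(x / dx)
        np.add.at(rho, j, 1.0 - f)
        np.add.at(rho, (j + 1) % n_grid, f)
        return rho / dx

    for _ in range(1000):
        pot = chi * density(x)                                   # field from density
        grad = (np.roll(pot, -1) - np.roll(pot, 1)) / (2 * dx)   # periodic gradient
        j = np.floor(x / dx).astype(int) % n_grid
        f = x / dx - np.floor(x / dx)
        force = -((1 - f) * grad[j] + f * grad[(j + 1) % n_grid])    # CIC interp
        x = (x + 0.01 * force + 0.02 * rng.normal(size=n_part)) % L  # Brownian step
    print(density(x).std())   # repulsion flattens the density profile over time
    ```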

  11. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.

  12. A stochastic framework for spot-scanning particle therapy.

    PubMed

    Robini, Marc; Yuemin Zhu; Wanyu Liu; Magnin, Isabelle

    2016-08-01

    In spot-scanning particle therapy, inverse treatment planning is usually limited to finding the optimal beam fluences given the beam trajectories and energies. We address the much more challenging problem of jointly optimizing the beam fluences, trajectories, and energies. For this purpose, we design a simulated annealing algorithm with an exploration mechanism that balances the conflicting demands of a small mixing time at high temperatures and a reasonable acceptance rate at low temperatures. Numerical experiments substantiate the relevance of our approach and open new horizons for spot-scanning particle therapy.
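
    The paper's exploration mechanism is specific to treatment planning, but the simulated annealing skeleton it builds on, Metropolis acceptance with a cooling schedule, looks roughly like this (geometric cooling and all parameters are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def anneal(f, x0, step, t0=1.0, t_min=1e-3, cooling=0.995):
        """Minimal simulated annealing: Metropolis acceptance, geometric cooling.

        f: objective to minimize; x0: initial solution; step: proposal scale.
        """
        x = np.asarray(x0, float)
        fx, t = f(x), t0
        best, best_f = x.copy(), fx
        while t > t_min:
            y = x + rng.normal(scale=step, size=x.shape)   # explore neighborhood
            fy = f(y)
            # Always accept improvements; accept uphill moves with Boltzmann prob.
            if fy < fx or rng.random() < np.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < best_f:
                    best, best_f = x.copy(), fx
            t *= cooling                                   # geometric cooling
        return best, best_f

    # Toy objective standing in for a treatment-plan cost.
    best, cost = anneal(lambda p: np.sum((p - np.array([1.0, -2.0, 0.5]))**2),
                        x0=np.zeros(3), step=0.3)
    print(best, cost)
    ```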

  13. Computational study of nonlinear plasma waves. [plasma simulation model applied to electrostatic waves in collisionless plasma

    NASA Technical Reports Server (NTRS)

    Matsuda, Y.

    1974-01-01

    A low-noise plasma simulation model is developed and applied to a series of linear and nonlinear problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of a magnetic field. It is demonstrated that use of the hybrid simulation model allows economical studies to be carried out in both the linear and nonlinear regimes, with better quantitative results for comparable computing time than can be obtained by conventional particle simulation models or direct solution of the Vlasov equation. The characteristics of the hybrid simulation model itself are first investigated, and it is shown to be capable of verifying the theoretical linear dispersion relation at wave energy levels as low as 10^-6 of the plasma thermal energy. Having established the validity of the hybrid simulation model, it is then used to study the nonlinear dynamics of a monochromatic wave, the sideband instability due to trapped particles, and satellite growth.

  14. A PSO-Based Hybrid Metaheuristic for Permutation Flowshop Scheduling Problems

    PubMed Central

    Zhang, Le; Wu, Jinnan

    2014-01-01

    This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime, and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, a simulated annealing step hybridized with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path relinking is presented to replace particles that have been trapped in local optima. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature. PMID:24672389

  16. Smoothed dissipative particle dynamics with angular momentum conservation

    NASA Astrophysics Data System (ADS)

    Müller, Kathrin; Fedosov, Dmitry A.; Gompper, Gerhard

    2015-01-01

    Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9] lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier-Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. Also, SDPD with angular momentum conservation is validated using two problems: (i) the Taylor-Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to properly capture the physical characteristics of the systems due to violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach to tackle fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.

  17. EXAMINING THE ACCURACY OF ASTROPHYSICAL DISK SIMULATIONS WITH A GENERALIZED HYDRODYNAMICAL TEST PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raskin, Cody; Owen, J. Michael, E-mail: raskin1@llnl.gov, E-mail: mikeowen@llnl.gov

    2016-11-01

    We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  18. Instabilities and Turbulence Generation by Pick-Up Ion Distributions in the Outer Heliosheath

    NASA Astrophysics Data System (ADS)

    Weichman, K.; Roytershteyn, V.; Delzanno, G. L.; Pogorelov, N.

    2017-12-01

    Pick-up ions (PUIs) play a significant role in the dynamics of the heliosphere. One problem that has attracted significant attention is the stability of ring-like distributions of PUIs and the electromagnetic fluctuations that could be generated by PUI distributions. For example, PUI stability is relevant to theories attempting to identify the origins of the IBEX ribbon. PUIs have previously been investigated by linear stability analysis of model (e.g., Gaussian) rings and corresponding computer simulations. The majority of these simulations utilized particle-in-cell methods, which suffer from accuracy limitations imposed by the statistical noise associated with representing the plasma by a relatively small number of computational particles. In this work, we utilize highly accurate spectral Vlasov simulations conducted using the fully kinetic implicit code SPS (Spectral Plasma Solver) to investigate the PUI distributions inferred from a global heliospheric model (Heerikhuisen et al., 2016). Results are compared with those obtained by hybrid and fully kinetic particle-in-cell methods.

  19. Particle-based simulations of polarity establishment reveal stochastic promotion of Turing pattern formation

    PubMed Central

    Ramirez, Samuel A.; Elston, Timothy C.

    2018-01-01

    Polarity establishment, the spontaneous generation of asymmetric molecular distributions, is a crucial component of many cellular functions. Saccharomyces cerevisiae (yeast) undergoes directed growth during budding and mating, and is an ideal model organism for studying polarization. In yeast and many other cell types, the Rho GTPase Cdc42 is the key molecular player in polarity establishment. During yeast polarization, multiple patches of Cdc42 initially form, then resolve into a single front. Because polarization relies on strong positive feedback, it is likely that the amplification of molecular-level fluctuations underlies the generation of multiple nascent patches. In the absence of spatial cues, these fluctuations may be key to driving polarization. Here we used particle-based simulations to investigate the role of stochastic effects in a Turing-type model of yeast polarity establishment. In the model, reactions take place either between two molecules on the membrane, or between a cytosolic and a membrane-bound molecule. Thus, we developed a computational platform that explicitly simulates molecules at and near the cell membrane, and implicitly handles molecules away from the membrane. To evaluate stochastic effects, we compared particle simulations to deterministic reaction-diffusion equation simulations. Defining macroscopic rate constants that are consistent with the microscopic parameters for this system is challenging, because diffusion occurs in two dimensions and particles exchange between the membrane and cytoplasm. We address this problem by empirically estimating macroscopic rate constants from appropriately designed particle-based simulations. Ultimately, we find that stochastic fluctuations speed polarity establishment and permit polarization in parameter regions predicted to be Turing stable. These effects can operate at Cdc42 abundances expected of yeast cells, and promote polarization on timescales consistent with experimental results. To our knowledge, our work represents the first particle-based simulations of a model for yeast polarization that is based on a Turing mechanism. PMID:29529021

  20. PMMA Third-Body Wear after Unicondylar Knee Arthroplasty Decuples the UHMWPE Wear Particle Generation In Vitro

    PubMed Central

    Paulus, Alexander Christoph; Franke, Manja; Kraxenberger, Michael; Schröder, Christian; Jansson, Volkmar

    2015-01-01

    Introduction. Overlooked polymethylmethacrylate after unicondylar knee arthroplasty can be a potential problem, since it might influence the generated wear particle size and morphology. The aim of this study was the analysis of polyethylene wear in a knee wear simulator for changes in size, morphology, and particle number after the addition of third bodies. Material and Methods. Fixed bearing unicondylar knee prostheses (UKA) were tested in a knee simulator for 5.0 million cycles. Subsequently, bone particles were added for 1.5 million cycles, followed by 1.5 million cycles with PMMA particles. A particle analysis of the lubricant after the cycles was performed by scanning electron microscopy. The size and morphology of the generated wear were characterized. Further, the number of particles per 1 million cycles was calculated for each group. Results. The particles of all groups were similar in size and shape. The number of particles in the PMMA group showed 10-fold higher values than in the bone and control groups (PMMA: 10.251 × 10^12; bone: 1.145 × 10^12; control: 1.804 × 10^12). Conclusion. The addition of bone or PMMA particles as third-body wear results in no change of particle size and morphology. PMMA third bodies generated tenfold elevated particle numbers. This could favor early aseptic loosening. PMID:25866795

  1. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
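
    As a toy illustration of the surrogate-plus-backpropagation idea (not the authors' network, which maps multilayer particle geometries to spectra), one can fit a small network to a stand-in "simulator" and then gradient-descend on its input; all functions and parameters here are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Toy stand-in for a scattering simulation: maps a "layer thickness" x to a
    # spectrum-like response y. A small MLP is fit to samples of it, then the
    # trained net is inverted by gradient descent on its input.
    sim = lambda x: np.sin(3 * x) + 0.5 * np.sin(7 * x)

    X = rng.uniform(0, 2, size=(256, 1))
    Y = sim(X)

    # One-hidden-layer tanh MLP trained with plain full-batch gradient descent.
    W1, b1 = rng.normal(scale=0.5, size=(1, 32)), np.zeros(32)
    W2, b2 = rng.normal(scale=0.5, size=(32, 1)), np.zeros(1)
    lr = 0.05
    for _ in range(5000):
        H = np.tanh(X @ W1 + b1)
        P = H @ W2 + b2
        dP = 2 * (P - Y) / len(X)          # d(MSE)/dP
        dH = (dP @ W2.T) * (1 - H**2)      # backprop through tanh
        W2 -= lr * H.T @ dP; b2 -= lr * dP.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

    # Inverse design: find x whose predicted response matches a target value,
    # using the analytical gradient through the network (backpropagation).
    target, x = 0.8, np.array([[0.3]])
    for _ in range(500):
        H = np.tanh(x @ W1 + b1)
        err = (H @ W2 + b2) - target
        dx = ((2 * err @ W2.T) * (1 - H**2)) @ W1.T
        x -= 0.1 * dx
    print(x.ravel(), sim(x).ravel())  # sim(x) should be close to the target 0.8
    ```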

  2. Fortran interface layer of the framework for developing particle simulator FDPS

    NASA Astrophysics Data System (ADS)

    Namekata, Daisuke; Iwasawa, Masaki; Nitadori, Keigo; Tanikawa, Ataru; Muranushi, Takayuki; Wang, Long; Hosono, Natsuki; Nomura, Kentaro; Makino, Junichiro

    2018-06-01

    Numerical simulations based on particle methods have been widely used in various fields including astrophysics. To date, various versions of simulation software have been developed by individual researchers or research groups in each field, at a huge cost in time and effort, even though the numerical algorithms used are very similar. To improve the situation, we have developed a framework, called FDPS (Framework for Developing Particle Simulators), which enables researchers to easily develop massively parallel particle simulation codes for arbitrary particle methods. Until version 3.0, FDPS provided an API (application programming interface) for the C++ programming language only. This limitation comes from the fact that FDPS is developed using the template feature of C++, which is essential to support arbitrary particle data types. However, many researchers use Fortran to develop their codes, and the previous versions of FDPS required them to invest considerable time in learning C++, which is inefficient. To cope with this problem, we developed a Fortran interface layer in FDPS, which provides an API for Fortran. In order to support arbitrary particle data types in Fortran, we designed the Fortran interface layer as follows. Based on a given derived data type in Fortran representing a particle, a Python script provided by us automatically generates a library that manipulates the C++ core part of FDPS. This library is seen as a Fortran module providing an API of FDPS from the Fortran side, and uses C programs internally to interoperate Fortran with C++. In this way, we have overcome several technical issues in emulating a `template' in Fortran. Using the Fortran interface, users can develop all parts of their codes in Fortran. We show that the overhead of the Fortran interface part is sufficiently small and that a code written in Fortran shows performance practically identical to one written in C++.

  3. Development of Modeling and Simulation for Magnetic Particle Inspection Using Finite Elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jun-Youl

    2003-01-01

    Magnetic particle inspection (MPI) is a widely used nondestructive inspection method for aerospace applications, essentially limited to experiment-based approaches. The analysis of MPI characteristics that affect sensitivity and reliability contributes not only to reductions in inspection design cost and time but also to improved analysis of experimental data. Magnetic particles are easily attracted toward a high magnetic field gradient. Selection of a magnetic field source that produces a magnetic field gradient large enough to detect a defect in a test sample or component is an important factor in magnetic particle inspection. In this work a finite element method (FEM) has been employed for numerical calculation in the MPI simulation technique. The FEM method is known to be suitable for complicated geometries such as defects in samples. This thesis describes research aimed at providing a quantitative scientific basis for magnetic particle inspection. A new FEM solver for MPI simulation has been developed in this research, covering not only nonlinear reversible-permeability materials but also irreversible hysteresis materials described by the Jiles-Atherton model. The material is assumed to have isotropic ferromagnetic properties (i.e., the magnetic properties of the material are identical in all directions in a single crystal). With a direct current field mode, an MPI situation has been simulated to measure the estimated volume of magnetic particles around defect sites before and after removing any external current fields. Currently, this new MPI simulation package is limited to solving problems with a single current source from either a solenoid or an axial directional current rod.

  4. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and ... approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of ... laws. Subject terms: finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.

  5. Disaggregation and separation dynamics of magnetic particles in a microfluidic flow under an alternating gradient magnetic field

    NASA Astrophysics Data System (ADS)

    Cao, Quanliang; Li, Zhenhao; Wang, Zhen; Qi, Fan; Han, Xiaotao

    2018-05-01

    How to prevent particle aggregation in the magnetic separation process is of great importance for high-purity separation, yet it is a challenging issue in practice. In this work, we report a novel method to solve this problem and improve the selectivity of size-based separation by use of a gradient alternating magnetic field. The specially designed magnetic field is capable of dynamically adjusting the magnetic field direction without changing the direction of the magnetic gradient force acting on the particles. Using direct numerical simulations, we show that particles within a certain center-to-center distance are inseparable under a gradient static magnetic field, since they easily aggregate and then move together. By contrast, it has been demonstrated that alternating repulsive and attractive interaction forces between particles can be generated to avoid the formation of aggregates when the alternating gradient magnetic field with a given alternating frequency is applied, enabling these particles to be continuously separated based on size-dependent properties. The proposed magnetic separation method and simulation results are significant for the fundamental understanding of particle dynamic behavior and for improving separation efficiency.

  6. A variational multiscale method for particle-cloud tracking in turbomachinery flows

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Sheard, A. G.; Takizawa, K.; Tezduyar, T. E.; Venturini, P.

    2014-11-01

    We present a computational method for simulation of particle-laden flows in turbomachinery. The method is based on a stabilized finite element fluid mechanics formulation and a finite element particle-cloud tracking method. We focus on induced-draft fans used in process industries to extract exhaust gases in the form of a two-phase fluid with a dispersed solid phase. The particle-laden flow causes material wear on the fan blades, degrading their aerodynamic performance, and therefore accurate simulation of the flow is essential for reliable computational turbomachinery analysis and design. The turbulent-flow nature of the problem is dealt with using a Reynolds-Averaged Navier-Stokes model and Streamline-Upwind/Petrov-Galerkin/Pressure-Stabilizing/Petrov-Galerkin stabilization, the particle-cloud trajectories are calculated based on the flow field and closure models for the turbulence-particle interaction, and one-way dependence is assumed between the flow field and particle dynamics. We propose a closure model utilizing the scale separation feature of the variational multiscale method, and compare it to the closure utilizing the eddy viscosity model. We present computations for axial- and centrifugal-fan configurations, and compare the computed data to those obtained from experiments, analytical approaches, and other computational methods.

  7. A conservative scheme for electromagnetic simulation of magnetized plasmas with kinetic electrons

    NASA Astrophysics Data System (ADS)

    Bao, J.; Lin, Z.; Lu, Z. X.

    2018-02-01

    A conservative scheme has been formulated and verified for gyrokinetic particle simulations of electromagnetic waves and instabilities in magnetized plasmas. An electron continuity equation derived from the drift kinetic equation is used to time advance the electron density perturbation by using the perturbed mechanical flow calculated from the parallel vector potential, and the parallel vector potential is solved by using the perturbed canonical flow from the perturbed distribution function. In gyrokinetic particle simulations using this new scheme, the shear Alfvén wave dispersion relation in the shearless slab and continuum damping in the sheared cylinder have been recovered. The new scheme overcomes the stringent requirement in the conventional perturbative simulation method that perpendicular grid size needs to be as small as electron collisionless skin depth even for the long wavelength Alfvén waves. The new scheme also avoids the problem in the conventional method that an unphysically large parallel electric field arises due to the inconsistency between electrostatic potential calculated from the perturbed density and vector potential calculated from the perturbed canonical flow. Finally, the gyrokinetic particle simulations of the Alfvén waves in sheared cylinder have superior numerical properties compared with the fluid simulations, which suffer from numerical difficulties associated with singular mode structures.

  8. Swarming behavior of gradient-responsive Brownian particles in a porous medium.

    PubMed

    Grančič, Peter; Štěpánek, František

    2012-07-01

    Active targeting by Brownian particles in a fluid-filled porous environment is investigated by computer simulation. The random motion of the particles is enhanced by diffusiophoresis with respect to concentration gradients of chemical signals released by the particles in the proximity of a target. The mathematical model, based on a combination of the Brownian dynamics method and a diffusion problem, is formulated in terms of key parameters that include the particle diffusiophoretic mobility and the signaling threshold (the distance from the target at which the particles release their chemical signals). The results demonstrate that even a relatively simple chemical signaling scheme can lead to complex collective behavior of the particles and can be a very efficient way of guiding a swarm of Brownian particles towards a target, similarly to the way colonies of living cells communicate via secondary messengers.
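
    A minimal Brownian-dynamics sketch of diffusiophoretic drift toward a target-centered signal field conveys the mechanism (the static, softened 1/r concentration profile and the omission of the signaling threshold are simplifying assumptions, not the paper's model):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Minimal 2D Brownian dynamics with diffusiophoretic drift up a chemical
    # gradient. A softened point-source concentration c(r) ~ 1/|r - target| is
    # assumed to be static.
    target = np.array([5.0, 5.0])
    mobility, D, dt, eps2 = 2.0, 0.1, 0.01, 0.25
    x = rng.uniform(0, 10, size=(50, 2))   # 50 particles in a 10 x 10 region

    def signal_gradient(x):
        """Gradient of the softened signal field, pointing toward the target."""
        r = x - target
        d2 = np.sum(r**2, axis=1, keepdims=True)
        return -r / (d2 + eps2)**1.5

    print(np.mean(np.linalg.norm(x - target, axis=1)))     # initial mean distance
    for _ in range(5000):
        drift = mobility * signal_gradient(x)              # diffusiophoresis
        noise = np.sqrt(2 * D * dt) * rng.normal(size=x.shape)
        x += drift * dt + noise
    print(np.mean(np.linalg.norm(x - target, axis=1)))     # swarm gathers at target
    ```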

  9. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
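
    The paper's efficiency model is not reproduced here, but a toy version of the event-based bookkeeping shows qualitatively why the bank size must be much larger than the vector width (the geometric event-count distribution and all numbers are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def vector_efficiency(bank_size, vector_width, mean_events=20.0):
        """Toy event-based transport model: each particle needs a geometric number
        of events; each event-iteration fills vector lanes with the live particles
        that remain. Efficiency = useful lanes / provisioned lanes.
        """
        events_left = rng.geometric(1.0 / mean_events, size=bank_size)
        useful = lanes = 0
        while events_left.size > 0:
            batch = min(vector_width, events_left.size)
            useful += batch            # lanes doing real work this iteration
            lanes += vector_width      # lanes provisioned this iteration
            events_left[:batch] -= 1
            events_left = events_left[events_left > 0]   # retire dead particles
        return useful / lanes

    width = 64
    for ratio in (1, 5, 20, 100):      # bank size as a multiple of vector width
        print(ratio, round(vector_efficiency(ratio * width, width), 3))
    ```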

  11. Comparison of continuum and particle simulations of expanding rarefied flows

    NASA Technical Reports Server (NTRS)

    Lumpkin, Forrest E., III; Boyd, Iain D.; Venkatapathy, Ethiraj

    1993-01-01

    Comparisons of Navier-Stokes solutions and particle simulations for a simple two-dimensional model problem at a succession of altitudes are performed in order to assess the importance of rarefaction effects on the base flow region. In addition, an attempt is made to include 'Burnett-type' extensions to the Navier-Stokes constitutive relations. The model geometry consists of a simple blunted wedge with a 0.425 meter nose radius, a 70 deg cone half angle, a 1.7 meter base length, and a rounded shoulder. The working gas is monatomic with a molecular weight and viscosity similar to air, and was chosen to focus the study on the continuum and particle methodologies rather than the implementation of thermo-chemical modeling. Three cases are investigated, all at Mach 29, with densities corresponding to altitudes of 92 km, 99 km, and 105 km. At the lowest altitude, Navier-Stokes solutions agree well with particle simulations. At the higher altitudes, the Navier-Stokes equations become less accurate. In particular, the Navier-Stokes equations and the particle method predict substantially different flow turning angles in the wake near the afterbody. Attempts to achieve steady continuum solutions including 'Burnett-type' terms failed. Further research is required to determine whether the boundary conditions, the equations themselves, or other unknown causes led to this failure.

  12. Large-scale particle acceleration by magnetic reconnection during solar flares

    NASA Astrophysics Data System (ADS)

    Li, X.; Guo, F.; Li, H.; Li, G.; Li, S.

    2017-12-01

    Magnetic reconnection that triggers explosive magnetic energy release has been widely invoked to explain large-scale particle acceleration during solar flares. While great effort has been spent studying the acceleration mechanism in small-scale kinetic simulations, few studies have made predictions for acceleration on scales comparable to the flare reconnection region. Here we present a new approach to study this problem. We solve the large-scale energetic-particle transport equation in the fluid velocity and magnetic fields from high-Lundquist-number MHD simulations of reconnection layers. This approach is based on examining the dominant acceleration mechanism and pitch-angle scattering in kinetic simulations. Due to the fluid compression in reconnection outflows and merging magnetic islands, particles are accelerated to high energies and develop power-law energy distributions. We find that the acceleration efficiency and power-law index depend critically on the upstream plasma beta and the magnitude of the guide field (the magnetic field component perpendicular to the reconnecting component), as they influence the compressibility of the reconnection layer. We also find that the accelerated high-energy particles are mostly concentrated in large magnetic islands, making the islands a source of energetic particles and high-energy emissions. These findings may provide explanations for the acceleration process in large-scale magnetic reconnection during solar flares and the temporal and spatial emission properties observed in different flare events.

  13. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  14. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  15. GPU Accelerated DG-FDF Large Eddy Simulator

    NASA Astrophysics Data System (ADS)

    Inkarbekov, Medet; Aitzhan, Aidyn; Sammak, Shervin; Givi, Peyman; Kaltayev, Aidarkhan

    2017-11-01

    A GPU accelerated simulator is developed and implemented for large eddy simulation (LES) of turbulent flows. The filtered density function (FDF) is utilized for modeling of the subgrid scale quantities. The filtered transport equations are solved via a discontinuous Galerkin (DG) method, and the FDF is simulated via a particle-based Lagrangian Monte Carlo (MC) method. It is demonstrated that the GPU simulations are of the order of 100 times faster than the CPU-based calculations. This brings LES of turbulent flows to a new level, facilitating efficient simulation of more complex problems. The work at Al-Farabi Kazakh National University is sponsored by MoES of RK under Grant 3298/GF-4.

  16. Study of the L-mode tokamak plasma “shortfall” with local and global nonlinear gyrokinetic δf particle-in-cell simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, J.; Wan, Weigang; Chen, Yang

    2014-11-15

    The δf particle-in-cell code GEM is used to study the transport “shortfall” problem of gyrokinetic simulations. In local simulations, the GEM results confirm the previously reported simulation results for the DIII-D [Holland et al., Phys. Plasmas 16, 052301 (2009)] and Alcator C-Mod [Howard et al., Nucl. Fusion 53, 123011 (2013)] tokamaks obtained with the continuum code GYRO. Namely, for DIII-D the simulations closely predict the ion heat flux at the core, while substantially underpredicting transport towards the edge; for Alcator C-Mod, the simulations show agreement with the experimental values of ion heat flux, at least within the range of experimental error. Global simulations are carried out for DIII-D L-mode plasmas to study the effect of edge turbulence on the outer core ion heat transport. The edge turbulence enhances the outer core ion heat transport through turbulence spreading. However, this edge turbulence spreading effect is not enough to explain the transport underprediction.

  17. Direct numerical simulation of gas-solid-liquid flows with capillary effects: An application to liquid bridge forces between spherical particles.

    PubMed

    Sun, Xiaosong; Sakai, Mikio

    2016-12-01

    In this study, a numerical method is developed to perform the direct numerical simulation (DNS) of gas-solid-liquid flows involving capillary effects. The volume-of-fluid method is employed to track the free surface, and the immersed boundary method is adopted for the fluid-particle coupling in three-phase flows. This numerical method is able to fully resolve the hydrodynamic force and capillary force, as well as the particle motions arising from complicated gas-solid-liquid interactions. We present its application to liquid bridges among spherical particles in this paper. Using the DNS method, we obtain the static bridge force as a function of the liquid volume, contact angle, and separation distance. The results from the DNS are compared with theoretical equations and other solutions to examine its validity and suitability for modeling capillary bridges. In particular, the nontrivial liquid bridges formed in triangular and tetrahedral particle clusters are calculated and some preliminary results are reported. We also perform dynamic simulations of liquid bridge rupture under axial stretching and of particle motion driven by liquid bridge action, for which accurate predictions are obtained with respect to the critical rupture distance and the equilibrium particle position, respectively. As shown through the simulations, the strength of the present method is its ability to predict liquid bridge problems under general conditions, from which models of liquid bridge actions may be constructed without limitations. It is therefore believed that this DNS method can be a useful tool for improving the understanding and modeling of liquid bridges formed in complex gas-solid-liquid flows.

  18. Simulation technique for slurries interacting with moving parts and deformable solids with applications

    NASA Astrophysics Data System (ADS)

    Mutabaruka, Patrick; Kamrin, Ken

    2018-04-01

    A numerical method for particle-laden fluids interacting with a deformable solid domain and mobile rigid parts is proposed and implemented in a full engineering system. The fluid domain is modeled with a lattice Boltzmann representation, the particles and rigid parts are modeled with a discrete element representation, and the deformable solid domain is modeled using a Lagrangian mesh. Since each of these methods is separately a mature tool, the main issue of this work is to develop coupling and model-reduction approaches that allow coupled problems of this nature, which arise in various geological and engineering applications, to be simulated efficiently. The lattice Boltzmann method incorporates a large eddy simulation technique using the Smagorinsky turbulence model. The discrete element method incorporates spherical and polyhedral particles for stiff contact interactions. A neo-Hookean hyperelastic model is used for the deformable solid. We provide a detailed description of how to couple the three solvers within a unified algorithm. The technique we propose for rubber modeling/coupling exploits a simplification that avoids having to solve a finite-element problem at each time step. We also developed a technique to reduce the domain size of the full system by replacing certain zones with quasi-analytic solutions, which act as effective boundary conditions for the lattice Boltzmann method. The major ingredients of the routine are separately validated. To demonstrate the coupled method in full, we simulate slurry flows in two kinds of piston valve geometries. The dynamics of the valve and slurry are studied and reported over a large range of input parameters.

  19. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm offers improved control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness and the optimization problem is considered as a whole; error back-propagation gradient descent is used to train the BP neural network, and each particle updates its velocity and position according to its individual optimum and the global optimum. Biasing the particles toward learning from the social (global) optimum and away from their individual optima helps them avoid local optima, while the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and then remains close to it; for the same running time it exhibits faster convergence and better search performance, improving in particular the efficiency of the later search stage.
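
    A toy version of the hybrid idea, offered as a sketch rather than the authors' algorithm: a standard PSO velocity/position update with a heavier social than cognitive coefficient, plus a small gradient-descent nudge. The test function and every coefficient below are assumptions made for illustration.

    ```python
    import numpy as np

    def hybrid_pso(f, grad_f, dim, n_particles=30, iters=200,
                   w=0.7, c_social=1.8, c_cognitive=0.9, eta=0.01, seed=0):
        """PSO with social learning weighted above cognitive learning,
        sharpened locally by a gradient step (coefficients illustrative)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
        g = pbest[pbest_val.argmin()].copy()           # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w*v + c_cognitive*r1*(pbest - x) + c_social*r2*(g - x)
            x = x + v - eta * np.array([grad_f(p) for p in x])  # gradient nudge
            val = np.array([f(p) for p in x])
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            g = pbest[pbest_val.argmin()].copy()
        return g, pbest_val.min()

    sphere = lambda p: float(np.sum(p**2))       # convex test function
    sphere_grad = lambda p: 2.0 * p
    best, best_val = hybrid_pso(sphere, sphere_grad, dim=5)
    print("best value:", best_val)
    ```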

  20. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations?

    NASA Astrophysics Data System (ADS)

    Baushev, A. N.; del Valle, L.; Campusano, L. E.; Escala, A.; Muñoz, R. R.; Palma, G. A.

    2017-05-01

    Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) for this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergence of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well-known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations throughout the whole halo, revealing an effective interaction between the test bodies. This result casts doubt on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of N-body simulation convergence is necessary before the 'core-cusp problem' can properly be used to question the validity of the CDM model.
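
    For readers who want to reproduce such a setup's initial conditions, the sketch below samples particle radii from the Hernquist profile by inverse-transform sampling of its enclosed-mass function; velocity sampling and the Gadget-3 run itself are of course not shown.

    ```python
    import numpy as np

    def hernquist_density(r, M=1.0, a=1.0):
        """Hernquist (1990) density profile: rho = M a / (2 pi r (r + a)^3)."""
        return M * a / (2.0 * np.pi * r * (r + a)**3)

    def sample_hernquist_radii(n, a=1.0, seed=0):
        """Inverse-transform sampling from the enclosed-mass profile
        M(<r)/M = r^2 / (r + a)^2  =>  r = a sqrt(u) / (1 - sqrt(u))."""
        u = np.random.default_rng(seed).random(n)
        s = np.sqrt(u)
        return a * s / (1.0 - s)

    r = sample_hernquist_radii(100_000)
    # crude check: the mass fraction inside r = a should be M(<a)/M = 1/4
    print("fraction inside a:", np.mean(r < 1.0))   # ~0.25
    ```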

  2. Discrete Particle Swarm Optimization Routing Protocol for Wireless Sensor Networks with Multiple Mobile Sinks

    PubMed Central

    Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming

    2016-01-01

    Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent change of the paths between source nodes and the sinks caused by sink mobility introduces significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization (PSO) algorithm to build the optimal routing paths. However, the conventional PSO is insufficient for solving discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario. The particle updating rule is also reconsidered based on the subnetwork topology of MWSNs. Moreover, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to find a better position quickly. Furthermore, the search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and the energy consumption. PMID:27428971

  3. Cosmological neutrino simulations at extreme scale

    DOE PAGES

    Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...

    2017-08-01

    Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world's largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
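
    The following toy conveys the flavor of fixed-point phase-space compression: each coordinate is stored as a small cell index plus a 16-bit intra-cell offset instead of a 4-byte float. This is an illustration of the general idea only, not the actual CUBEP3M/TianNu scheme, whose layout differs.

    ```python
    import numpy as np

    def compress_positions(pos, box, n_cells, bits=16):
        """Store each coordinate as a coarse cell index plus a fixed-point
        intra-cell offset (toy compression; the production scheme differs)."""
        cell_size = box / n_cells
        idx = np.minimum((pos // cell_size).astype(np.uint16), n_cells - 1)
        frac = pos / cell_size - idx                     # offset in [0, 1]
        offs = np.round(frac * (2**bits - 1)).astype(np.uint16)
        return idx, offs

    def decompress_positions(idx, offs, box, n_cells, bits=16):
        cell_size = box / n_cells
        return (idx + offs / (2**bits - 1)) * cell_size

    rng = np.random.default_rng(1)
    pos = rng.random((1_000_000, 3)) * 100.0             # box units, say Mpc/h
    idx, offs = compress_positions(pos, 100.0, n_cells=64)
    rec = decompress_positions(idx, offs, 100.0, n_cells=64)
    print("max abs round-trip error:", np.abs(rec - pos).max())
    ```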

  4. Shape Universality Classes in the Random Sequential Adsorption of Nonspherical Particles

    NASA Astrophysics Data System (ADS)

    Baule, Adrian

    2017-07-01

    Random sequential adsorption (RSA) of particles of a particular shape is used in a large variety of contexts to model particle aggregation and jamming. A key feature of these models is the observed algebraic time dependence of the asymptotic jamming coverage, ∼ t^(−ν) as t → ∞. However, the exact value of the exponent ν is not known apart from the simplest case of the RSA of monodisperse spheres adsorbed on a line (Renyi's seminal "car parking problem"), where ν = 1 can be derived analytically. Empirical simulation studies have conjectured on a case-by-case basis that for general nonspherical particles, ν = 1/(d + d̃), where d denotes the dimension of the domain and d̃ the number of orientational degrees of freedom of a particle. Here, we solve this long-standing problem analytically for the d = 1 case: the "Paris car parking problem." We prove, in particular, that the scaling exponent depends on the particle shape, contrary to the original conjecture, and, remarkably, falls into two universality classes: (i) ν = 1/(1 + d̃/2) for shapes with a smooth contact distance, e.g., ellipsoids, and (ii) ν = 1/(1 + d̃) for shapes with a singular contact distance, e.g., spherocylinders and polyhedra. The exact solution explains, in particular, why many empirically observed scalings fall in between these two limits.
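
    Renyi's d = 1 baseline is easy to reproduce numerically; the sketch below runs random sequential adsorption of unit "cars" on a line and prints the coverage approaching the jamming value near 0.7476 (the ν = 1 case). Line length and attempt counts are arbitrary.

    ```python
    import bisect
    import numpy as np

    def rsa_line(L=1000.0, attempts=500_000, seed=0):
        """Random sequential adsorption of unit-length cars on a line of
        length L (Renyi's car parking problem). Coverage jams near 0.7476,
        approached algebraically as t^(-nu) with nu = 1 in d = 1."""
        rng = np.random.default_rng(seed)
        parked = []                                # sorted left endpoints
        history = []
        for t in range(1, attempts + 1):
            x = rng.random() * (L - 1.0)
            i = bisect.bisect_left(parked, x)
            ok = (i == 0 or x - parked[i-1] >= 1.0) and \
                 (i == len(parked) or parked[i] - x >= 1.0)
            if ok:
                bisect.insort(parked, x)           # accept the new car
            if t % 100_000 == 0:
                history.append((t, len(parked) / L))
        return history

    for t, phi in rsa_line():
        print(f"attempts = {t:7d}   coverage = {phi:.4f}")
    ```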

  5. Hydrodynamic Simulations of Giant Impacts

    NASA Astrophysics Data System (ADS)

    Reinhardt, Christian; Stadel, Joachim

    2013-07-01

    We studied the basic numerical aspects of giant impacts using Smoothed Particle Hydrodynamics (SPH), which has been used in most of the prior studies conducted in this area (e.g., Benz, Canup). Our main goal was to modify the massively parallel, multi-stepping code GASOLINE, widely used in cosmological simulations, so that it can properly simulate the behavior of condensed materials such as granite or iron using the Tillotson equation of state. GASOLINE has been used to simulate hundreds of millions of particles with ideal gas physics, so using several million particles in condensed-material simulations seems possible. In order to focus our attention on the numerical aspects of the problem, we neglected the internal structure of the protoplanets and modelled them as homogeneous (isothermal) granite spheres. For the energy balance we only considered PdV work and shock heating of the material during the impact (cooling of the material was neglected). Starting at a low resolution of 2048 particles for the target and the impactor, we ran several simulations for different impact parameters and impact velocities and successfully reproduced the main features of the pioneering work of Benz from 1986. The impact sends a shock wave through both bodies, heating the target and disrupting the remaining impactor. As in prior simulations, material is ejected from the collision. How much, and whether it leaves the system or survives in an orbit for a longer time, depends on the initial conditions but also on resolution. Increasing the resolution (to 1.2×10⁶ particles) results in both a much clearer shock wave and deformation of the bodies during the impact, and a more compact and detailed "arm"-like structure of the ejected material. Currently we are investigating some numerical issues we encountered and are implementing differentiated models, moving one step closer to more realistic protoplanets in such giant impact simulations.
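
    As a minimal SPH ingredient relevant to such runs, the sketch below evaluates the standard cubic-spline density summation by direct O(N²) pairing. Real impact codes add neighbor search, an equation of state (e.g., Tillotson), and the momentum and energy equations; the particle count and smoothing length here are illustrative.

    ```python
    import numpy as np

    def w_cubic(r, h):
        """Standard 3D cubic-spline SPH kernel (Monaghan), support 2h."""
        q = r / h
        sigma = 1.0 / (np.pi * h**3)
        return sigma * np.where(q < 1.0,
                                1.0 - 1.5*q**2 + 0.75*q**3,
                                np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))

    def sph_density(pos, m, h):
        """Direct O(N^2) SPH summation: rho_i = sum_j m_j W(|r_ij|, h)."""
        diff = pos[:, None, :] - pos[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        return (m[None, :] * w_cubic(r, h)).sum(axis=1)

    rng = np.random.default_rng(0)
    pos = rng.random((500, 3))            # particles in a unit box
    m = np.full(500, 1.0 / 500)           # total mass 1 => mean density ~1
    rho = sph_density(pos, m, h=0.1)
    print("mean SPH density:", rho.mean())   # below 1 near the box edges
    ```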

  6. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively inexpensive auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
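
    A stripped-down, assumption-laden version of the fusion idea: a cheap "coarse" model supplies the trend everywhere, a Gaussian process fitted to a few expensive samples corrects the residual, and the two are summed. This is a generic two-level construction for illustration, not the paper's multi-level GPR or Diffusion Maps machinery; the models and hyperparameters are invented.

    ```python
    import numpy as np

    def gp_fit_predict(Xt, yt, Xp, ell=0.3, sig=1.0, noise=1e-6):
        """Minimal GP regression with an RBF kernel (Cholesky solve)."""
        def k(A, B):
            d = A[:, None] - B[None, :]
            return sig**2 * np.exp(-0.5 * (d / ell)**2)
        K = k(Xt, Xt) + noise * np.eye(len(Xt))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, yt))
        return k(Xp, Xt) @ alpha

    # invented truth and a biased "coarse" auxiliary model
    f_hi = lambda x: np.sin(8*x) + x
    f_lo = lambda x: np.sin(8*x)            # cheap but misses the trend

    Xlo = np.linspace(0, 1, 40)             # plentiful coarse data
    Xhi = np.array([0.1, 0.4, 0.6, 0.9])    # a few expensive fine samples
    Xp = np.linspace(0, 1, 9)

    coarse = gp_fit_predict(Xlo, f_lo(Xlo), Xp)              # level 1
    resid = gp_fit_predict(Xhi, f_hi(Xhi) - f_lo(Xhi), Xp)   # level 2
    fused = coarse + resid
    print("max |fused - truth|:", np.abs(fused - f_hi(Xp)).max())
    ```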

  7. Space Flows and Disturbances Due to Bodies in Motion Through the Magnetoplasma

    NASA Astrophysics Data System (ADS)

    Ponomarjov, Maxim G.

    2000-10-01

    In this paper a method is presented which makes it possible to describe numerically and analytically the most famous structures in the non-equilibrium ionosphere, such as stratified and yacht-sail-like structures, flute jets, wakes and clouds. These problems are of practical interest in space sciences, astrophysics and in turbulence theory, and also of fundamental interest since they enable one to concentrate on the effects of the ambient electric and magnetic fields. Disturbances of charged particle flows due to the ambient flow interactions with bodies are simulated taking into account the effect of the ambient magnetic field. The effects of interactions between solid surfaces and the flows were simulated by making use of an original image method. The flow disturbances were described by the Boltzmann equation. In the case of an ambient homogeneous magnetic field the Boltzmann equation is solved analytically. The case of diffuse reflection of particles by the surface is considered in detail. The disturbances of charged particle concentration are calculated in 3D space. The contours of constant particle concentration obtained from numerical simulations illustrate the dynamics of developing stratifications and flute structures in charged particle jets and wakes under the effect of the ambient magnetic field. The basic goal of this paper is to present the method and to demonstrate its capability for simulations of turbulence, plasma jets, wakes and clouds in the ionosphere and space when the effects of electric and magnetic fields are taken into account.

  8. On the simulation of indistinguishable fermions in the many-body Wigner formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.

    2015-01-01

    The simulation of quantum systems consisting of interacting, indistinguishable fermions is an incredible mathematical problem which poses formidable numerical challenges. Many sophisticated methods addressing this problem are available which are based on the many-body Schrödinger formalism. Recently a Monte Carlo technique for the resolution of the many-body Wigner equation has been introduced and successfully applied to the simulation of distinguishable, spinless particles. This numerical approach presents several advantages over other methods. Indeed, it is based on an intuitive formalism in which quantum systems are described in terms of a quasi-distribution function, and highly scalable due to its Monte Carlo nature. In this work, we extend the many-body Wigner Monte Carlo method to the simulation of indistinguishable fermions. To this end, we first show how fermions are incorporated into the Wigner formalism. Then we demonstrate that the Pauli exclusion principle is intrinsic to the formalism. As a matter of fact, a numerical simulation of two strongly interacting fermions (electrons) is performed which clearly shows the appearance of a Fermi (or exchange–correlation) hole in the phase-space, a clear signature of the presence of the Pauli principle. To conclude, we simulate 4, 8 and 16 non-interacting fermions, isolated in a closed box, and show that, as the number of fermions increases, we gradually recover the Fermi–Dirac statistics, a clear proof of the reliability of our proposed method for the treatment of indistinguishable particles.

  9. A contact algorithm for shell problems via Delaunay-based meshing of the contact domain

    NASA Astrophysics Data System (ADS)

    Kamran, K.; Rossi, R.; Oñate, E.

    2013-07-01

    The simulation of contact within shells, with all of its different facets, still represents an open challenge in Computational Mechanics. Despite the effort spent on the development of techniques for the simulation of general contact problems, an all-seasons algorithm applicable to complex shell contact problems is yet to be developed. This work focuses on the solution of contact between thin shells by using a technique derived from the particle finite element method together with a rotation-free shell triangle. The key concept is to define a discretization of the contact domain (CD) by constructing a finite element mesh of four-noded tetrahedra that describes the potential contact volume. The problem is completed by using an assumed-strain approach to define an elastic contact strain over the CD.

  10. Pushing particles in extreme fields

    NASA Astrophysics Data System (ADS)

    Gordon, Daniel F.; Hafizi, Bahman; Palastro, John

    2017-03-01

    The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
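
    For reference, a standard non-relativistic Boris velocity update looks as follows; the scheme's hallmark, an exact-norm magnetic rotation that does no work, is checked in the usage lines. The paper's improved extreme-field scheme is not reproduced here, and the field values are illustrative.

    ```python
    import numpy as np

    def boris_push(v, E, B, q_m, dt):
        """One non-relativistic Boris velocity update: half electric impulse,
        norm-preserving magnetic rotation, half electric impulse."""
        v_minus = v + 0.5 * q_m * E * dt
        t = 0.5 * q_m * B * dt
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        return v_plus + 0.5 * q_m * E * dt

    # pure magnetic field: the speed must be conserved to machine precision
    v = np.array([1.0, 0.0, 0.0])
    B = np.array([0.0, 0.0, 2.0])
    for _ in range(1000):
        v = boris_push(v, np.zeros(3), B, q_m=1.0, dt=0.1)
    print("speed drift:", abs(np.linalg.norm(v) - 1.0))
    ```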

  11. The Marriage of Gas and Dust

    NASA Astrophysics Data System (ADS)

    Price, D. J.; Laibe, G.

    2015-10-01

    Dust-gas mixtures are the simplest example of a two-fluid mixture. We show that when simulating such mixtures with particles, or with particles coupled to grids, a problem arises from the need to resolve a very small length scale when the coupling is strong. Since this occurs in the limit where the fluids are well coupled, we show how the dust-gas equations can be reformulated to describe a single-fluid mixture. The equations are similar to the usual fluid equations supplemented by a diffusion equation for the dust-to-gas ratio or, alternatively, the dust fraction. This solves a number of numerical problems as well as making the physics clear.
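
    A toy of the reformulated one-fluid picture: the sketch below evolves a 1D diffusion equation for a dust fraction with an explicit update. The constant diffusion coefficient is a placeholder assumption; in the actual formulation the coefficient involves the drag stopping time and the pressure gradient.

    ```python
    import numpy as np

    def diffuse_dust_fraction(eps, D, dx, dt, steps):
        """Explicit FTCS update of d(eps)/dt = D d2(eps)/dx2 on a periodic
        domain; a stand-in for the single-fluid dust-fraction evolution."""
        assert D * dt / dx**2 <= 0.5, "FTCS stability limit violated"
        for _ in range(steps):
            lap = (np.roll(eps, -1) - 2.0*eps + np.roll(eps, 1)) / dx**2
            eps = eps + dt * D * lap
        return eps

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    eps0 = 0.01 + 0.005 * np.exp(-((x - 0.5) / 0.05)**2)   # dusty blob
    eps = diffuse_dust_fraction(eps0, D=1e-3, dx=x[1]-x[0], dt=1e-3, steps=2000)
    print("total dust conserved:", np.isclose(eps.sum(), eps0.sum()))
    ```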

  12. Some Progress in Large-Eddy Simulation using the 3-D Vortex Particle Method

    NASA Technical Reports Server (NTRS)

    Winckelmans, G. S.

    1995-01-01

    This two-month visit at CTR was devoted to investigating possibilities in LES modeling in the context of the 3-D vortex particle method (=vortex element method, VEM) for unbounded flows. A dedicated code was developed for that purpose. Although O(N²) and thus slow, it offers the advantage that it can easily be modified to try out many ideas on problems involving up to N ≈ 10⁴ particles. Energy spectra (which require O(N²) operations per wavenumber) are also computed. Progress was realized in the following areas: particle redistribution schemes, relaxation schemes to maintain the solenoidal condition on the particle vorticity field, simple LES models and their VEM extension, and possible new avenues in LES. Model problems that involve strong interaction between vortex tubes were computed, together with diagnostics: total vorticity, linear and angular impulse, energy and energy spectrum, and enstrophy. More work is needed, however, especially regarding relaxation schemes and further validation and development of LES models for VEM. Finally, what works well will eventually have to be incorporated into the fast parallel tree code.
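
    The O(N²) core of such a code is the direct regularized Biot-Savart sum; a version with low-order algebraic smoothing is sketched below as an assumption-level illustration (production VEM codes use higher-order kernels and, per the closing remark, fast tree summation).

    ```python
    import numpy as np

    def biot_savart_velocity(x, alpha, sigma):
        """Direct O(N^2) regularized Biot-Savart evaluation:
        u_i = (1/4 pi) sum_j alpha_j x r_ij / (|r_ij|^2 + sigma^2)^(3/2),
        with r_ij = x_i - x_j and alpha_j the particle vorticity strengths."""
        r = x[:, None, :] - x[None, :, :]                   # (N, N, 3)
        denom = (np.sum(r**2, axis=-1) + sigma**2)**1.5     # (N, N)
        cross = np.cross(alpha[None, :, :], r)              # alpha_j x r_ij
        return (cross / denom[..., None]).sum(axis=1) / (4.0 * np.pi)

    rng = np.random.default_rng(0)
    N = 1000
    x = rng.random((N, 3))
    alpha = rng.normal(size=(N, 3)) / N      # random vortex strengths
    u = biot_savart_velocity(x, alpha, sigma=0.05)
    print("velocity field shape:", u.shape)
    ```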

  13. Ultrahigh-order Maxwell solver with extreme scalability for electromagnetic PIC simulations of plasmas

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri; Vay, Jean-Luc

    2018-07-01

    The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly-scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasmas simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiations, we have shown that the inaccuracies of standard FD-based PIC methods prevent the modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.
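
    The accuracy gap the abstract refers to is easy to demonstrate in one dimension: an FFT-based (spectral) derivative of a smooth periodic field is accurate to roundoff where a second-order finite difference is not. The sketch below is illustrative only and does not implement the paper's local-FFT Maxwell solver.

    ```python
    import numpy as np

    # Compare a 2nd-order centered difference with a spectral derivative
    # for a smooth periodic field.
    n = 64
    x = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    f = np.sin(3*x)
    exact = 3.0*np.cos(3*x)

    dx = x[1] - x[0]
    fd = (np.roll(f, -1) - np.roll(f, 1)) / (2.0*dx)      # 2nd-order FD

    k = np.fft.fftfreq(n, d=dx) * 2.0*np.pi               # wavenumbers
    spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

    print("FD error:      ", np.abs(fd - exact).max())
    print("spectral error:", np.abs(spectral - exact).max())
    ```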

  14. DEM Solutions Develops Answers to Modeling Lunar Dust and Regolith

    NASA Technical Reports Server (NTRS)

    Dunn, Carol Anne; Calle, Carlos; LaRoche, Richard D.

    2010-01-01

    With the proposed return to the Moon, scientists like NASA-KSC's Dr. Calle are concerned for a number of reasons. We will be staying longer on the planet's surface, future missions may include dust-raising activities, such as excavation and handling of lunar soil and rock, and we will be sending robotic instruments to do much of the work for us. Understanding more about the chemical and physical properties of lunar dust, how dust particles interact with each other and with equipment surfaces, and the role of static-electricity build-up on dust particles in the low-humidity lunar environment is imperative to the development of technologies for removing and preventing dust accumulation and for successfully handling lunar regolith. Dr. Calle is currently working on the problems of the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces, particularly those of Mars and the Moon, and is heavily involved in developing instrumentation for future planetary missions. With this end in view, the NASA Kennedy Space Center's Innovative Partnerships Program Office partnered with DEM Solutions, Inc. DEM Solutions is a global leader in particle dynamics simulation software, providing custom solutions for use in tackling tough design and process problems related to bulk solids handling. Customers in industries such as pharmaceutical, chemical, mineral, and materials processing, as well as oil and gas production, agricultural and construction, and geotechnical engineering, use DEM Solutions' EDEM™ software to improve the design and operation of their equipment while reducing development costs, time-to-market and operational risk. EDEM is the world's first general-purpose computer-aided engineering (CAE) tool to use state-of-the-art discrete element modeling technology for the simulation and analysis of particle handling and manufacturing operations. With EDEM you can quickly and easily create a parameterized model of your granular solids system. Computer-aided design (CAD) models of real particles can be imported to obtain an accurate representation of their shape. EDEM™ uses particle-scale behavior models to simulate bulk solids behavior. In addition to particle size and shape, the models can account for physical properties of particles along with interactions between particles and with equipment surfaces and surrounding media, as needed to define the physics of a particular process.
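
    As a flavor of the particle-scale contact modeling such DEM tools provide, the sketch below implements the simplest linear spring-dashpot normal force between two spheres. EDEM itself ships more sophisticated models (e.g., Hertz-Mindlin), and the stiffness and damping constants here are illustrative assumptions.

    ```python
    import numpy as np

    def normal_contact_force(x1, x2, v1, v2, r1, r2, kn=1e4, cn=5.0):
        """Linear spring-dashpot normal contact force between two spheres.
        Returns the force acting on particle 1; constants are illustrative."""
        d = x1 - x2
        dist = np.linalg.norm(d)
        overlap = (r1 + r2) - dist
        if overlap <= 0.0:
            return np.zeros(3)                   # no contact
        n = d / dist                             # contact normal (2 -> 1)
        vn = np.dot(v1 - v2, n)                  # normal relative speed
        return (kn * overlap - cn * vn) * n      # elastic repulsion + damping

    # two 1 mm grains, slightly overlapping, approaching each other
    F = normal_contact_force(np.zeros(3), np.array([0.0, 0.0, 0.0019]),
                             np.zeros(3), np.array([0.0, 0.0, -0.1]),
                             r1=0.001, r2=0.001)
    print("contact force on particle 1:", F)
    ```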

  15. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    NASA Astrophysics Data System (ADS)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
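
    A simpler (and less efficient) relative of the paper's approach, shown only to fix ideas: annihilation with a spatially varying rate k(x) along a Brownian path can be sampled exactly by thinning against a bounding rate k_max, accepting each candidate event with probability k(x)/k_max. The rate profile and parameters below are invented.

    ```python
    import numpy as np

    def walk_until_annihilation(x0, D, k, k_max, seed=0):
        """1D Brownian particle with position-dependent annihilation rate
        k(x) <= k_max, sampled by thinning: candidate events arrive with
        rate k_max and are accepted with probability k(x)/k_max."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        while True:
            tau = rng.exponential(1.0 / k_max)             # candidate time
            x += rng.normal(0.0, np.sqrt(2.0 * D * tau))   # diffuse to it
            t += tau
            if rng.random() < k(x) / k_max:                # thinning test
                return t, x

    k = lambda x: 0.5 + 0.5 * np.tanh(x)    # rate grows to the right
    times = [walk_until_annihilation(0.0, D=1.0, k=k, k_max=1.0, seed=s)[0]
             for s in range(1000)]
    print("mean annihilation time:", np.mean(times))
    ```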

  16. Design and multi-physics optimization of rotary MRF brakes

    NASA Astrophysics Data System (ADS)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase, and the execution speed becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information to the next party are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs were determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than conventional PSO and has provided small, lightweight, high-impedance rotary MRF brake designs.

  17. Smoothed dissipative particle dynamics with angular momentum conservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Kathrin, E-mail: k.mueller@fz-juelich.de; Fedosov, Dmitry A., E-mail: d.fedosov@fz-juelich.de; Gompper, Gerhard, E-mail: g.gompper@fz-juelich.de

    Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered as an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9], lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier–Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. Also, SDPD with angular momentum conservation is validated using two problems: (i) the Taylor–Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to capture properly physical characteristics of the systems due to violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach to tackle fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.

  18. Statistics of Magnetic Reconnection X-Lines in Kinetic Turbulence

    NASA Astrophysics Data System (ADS)

    Haggerty, C. C.; Parashar, T.; Matthaeus, W. H.; Shay, M. A.; Wan, M.; Servidio, S.; Wu, P.

    2016-12-01

    In this work we examine the statistics of magnetic reconnection sites (X-lines) and their associated reconnection rates in intermittent current sheets generated in turbulent plasmas. Although such statistics have been studied previously for fluid simulations (e.g. [1]), they have not yet been generalized to fully kinetic particle-in-cell (PIC) simulations. A significant problem with PIC simulations, however, is electrostatic fluctuations generated by numerical particle counting statistics. We find that analyzing gradients of the magnetic vector potential from the raw PIC field data identifies numerous artificial (non-physical) X-points. Using small Orszag-Tang vortex PIC simulations, we analyze X-line identification and show that these artificial X-lines can be removed using sub-Debye-length filtering of the data. We examine how turbulent properties such as the magnetic spectrum and scale-dependent kurtosis are affected by particle noise and sub-Debye-length filtering. We subsequently apply these analysis methods to a large-scale kinetic PIC turbulence simulation. Consistent with previous fluid models, we find a range of normalized reconnection rates as large as 1/2, but with the bulk of the rates below 0.1. [1] Servidio, S., W. H. Matthaeus, M. A. Shay, P. A. Cassak, and P. Dmitruk (2009), Magnetic reconnection and two-dimensional magnetohydrodynamic turbulence, Phys. Rev. Lett., 102, 115003.
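
    A minimal sketch of X-point identification from a 2D flux function on a grid: flag cells where both first derivatives change sign (a critical point) and the Hessian determinant is negative (a saddle). On PIC data this should follow the sub-Debye-length filtering the abstract describes; the synthetic field here is an assumption for illustration.

    ```python
    import numpy as np

    def find_x_points(A, dx=1.0, dy=1.0):
        """Locate candidate X-points of a flux function A(y, x): cells where
        both gradient components change sign and the Hessian determinant is
        negative (saddle). A simple stencil-based classifier."""
        Ay, Ax = np.gradient(A, dy, dx)
        Axx = np.gradient(Ax, dx, axis=1)
        Ayy = np.gradient(Ay, dy, axis=0)
        Axy = np.gradient(Ax, dy, axis=0)
        sign_change_x = Ax[:-1, :-1] * Ax[:-1, 1:] < 0.0
        sign_change_y = Ay[:-1, :-1] * Ay[1:, :-1] < 0.0
        saddle = (Axx * Ayy - Axy**2)[:-1, :-1] < 0.0
        return np.argwhere(sign_change_x & sign_change_y & saddle)

    # synthetic island chain: X-points sit between the magnetic islands
    y, x = np.mgrid[0:2*np.pi:128j, 0:2*np.pi:128j]
    A = np.cos(x) * np.cos(y)
    pts = find_x_points(A, dx=x[0, 1]-x[0, 0], dy=y[1, 0]-y[0, 0])
    print("candidate X-point cells:", len(pts))
    ```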

  19. Numerical Analysis of Dusty-Gas Flows

    NASA Astrophysics Data System (ADS)

    Saito, T.

    2002-02-01

    This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used for validating the accuracy and performance of the code. The code is then extended for simulating two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, we can choose numerical schemes independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue, and the code implementation on Origin2000 is also described. Flow profiles of both the gas and solid particles behind the steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions for unsteady multidimensional simulations.

  20. Hopper Flow: Experiments and Simulation

    NASA Astrophysics Data System (ADS)

    Li, Zhusong; Shattuck, Mark

    2013-03-01

    Jamming and intermittent granular flow are important problems in industry, and the vertical hopper is a canonical example. Clogging of granular hoppers accounts for significant losses across many industries. We use realistic DEM simulations of gravity-driven flow in a hopper to examine the flow and jamming of 2D disks and compare with identical companion experiments. We use experimental data to validate simulation parameters and the form of the inter-particle force law. We measure and compare flow rate, emptying times, jamming statistics, and flow fields as a function of opening angle and opening size in both experiment and simulations. Supported by: NSF-CBET-0968013

  1. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  4. Non-local transport in turbulent MHD convection

    NASA Technical Reports Server (NTRS)

    Miesch, Mark; Brandenburg, Axel; Zweibel, Ellen; Toomre, Juri

    1995-01-01

    The nonlocal, non-diffusive transport of passive scalars in turbulent magnetohydrodynamic (MHD) convection is investigated using transilient matrices. These matrices describe the probability that a tracer particle beginning at one position in a flow will be advected to another position after some time. A method for the calculation of these matrices from simulation data, which involves following the trajectories of passive tracer particles and calculating their transport statistics, is presented. The method is applied to study the transport in several simulations of turbulent, rotating, three-dimensional, compressible, penetrative MHD convection. Transport coefficients and other diagnostics are used to quantify the transport, which is found to resemble advection more closely than diffusion. Some of the results are found to have direct relevance to other physical problems, such as light-element depletion in Sun-like stars. The large kurtosis found for downward-moving particles at the base of the convection zone implies the presence of extreme events.
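
    The transilient-matrix construction itself is a short computation: bin the tracer start and end heights and normalize the columns into probabilities. The sketch below does this for synthetic displacements; the toy displacement model and bin counts are arbitrary assumptions.

    ```python
    import numpy as np

    def transilient_matrix(z_start, z_end, n_bins, z_min, z_max):
        """Entry (i, j) is the probability that a tracer starting in height
        bin j ends in bin i after the elapsed time (columns sum to one)."""
        bins = np.linspace(z_min, z_max, n_bins + 1)
        j = np.clip(np.digitize(z_start, bins) - 1, 0, n_bins - 1)
        i = np.clip(np.digitize(z_end, bins) - 1, 0, n_bins - 1)
        T = np.zeros((n_bins, n_bins))
        np.add.at(T, (i, j), 1.0)
        col = T.sum(axis=0)
        return T / np.where(col > 0, col, 1.0)

    # toy "convection": random vertical hops with depth-dependent spread
    rng = np.random.default_rng(0)
    z0 = rng.uniform(0.0, 1.0, 100_000)
    z1 = np.clip(z0 + rng.normal(0.0, 0.05 + 0.2*z0), 0.0, 1.0)
    T = transilient_matrix(z0, z1, n_bins=10, z_min=0.0, z_max=1.0)
    print("columns sum to 1:", np.allclose(T.sum(axis=0), 1.0))
    ```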

  5. Multirate Particle-in-Cell Time Integration Techniques of Vlasov-Maxwell Equations for Collisionless Kinetic Plasma Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Knoll, Dana Alan

    2015-07-31

    A multi-rate PIC formulation was developed that employs large timesteps for the slow field evolution and small (adaptive) timesteps for particle orbit integrations. The implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω_pe Δt ≫ 1 and Δx ≫ λ_D) and requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large-scale simulations. The paper is organized as follows: Vlasov-Maxwell particle-in-cell (PIC) methods for plasmas; explicit, semi-implicit, and implicit time integrations; implicit PIC formulation (Jacobian-Free Newton-Krylov (JFNK) with nonlinear elimination allows different treatments of disparate scales and discrete conservation properties (energy, charge, canonical momentum, etc.)); some numerical examples; and summary.

  6. A technique to remove the tensile instability in weakly compressible SPH

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Yu, Peng

    2018-01-01

    When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free-surface flows, a numerical problem called the tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing the free-surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a kernel gradient correction and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, simulations of an impacting drop, the injection molding of a C-shaped cavity, and extrudate swell are conducted. The numerical results obtained are compared with those from other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress) for removing the tensile instability is further performed. All numerical results agree well with the available data.
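
    One common ingredient of such schemes, sketched here under the assumption of a plain Fickian-type shift (in the spirit of Lind et al.) without the paper's optimizations or free-surface treatment: particles are moved down the gradient of a kernel-built particle concentration.

    ```python
    import numpy as np

    def w_grad_cubic_2d(rvec, h):
        """Gradient of the 2D cubic-spline kernel with respect to r_i."""
        r = np.linalg.norm(rvec, axis=-1, keepdims=True)
        q = r / h
        sigma = 10.0 / (7.0 * np.pi * h * h)
        dw = np.where(q < 1.0, -3.0*q + 2.25*q**2,
             np.where(q < 2.0, -0.75*(2.0 - q)**2, 0.0))
        return sigma / h * dw * rvec / np.where(r > 0, r, 1.0)

    def fickian_shift(pos, m, rho, h, C=0.5):
        """Particle shifting: delta r_i = -C h^2 sum_j (m_j/rho_j) grad W_ij,
        i.e., a step down the particle-concentration gradient."""
        diff = pos[:, None, :] - pos[None, :, :]
        grads = w_grad_cubic_2d(diff, h)                 # (N, N, 2)
        vol = (m / rho)[None, :, None]
        return -C * h * h * (vol * grads).sum(axis=1)

    rng = np.random.default_rng(0)
    pos = rng.random((400, 2))
    shift = fickian_shift(pos, np.full(400, 1/400), np.full(400, 1.0), h=0.08)
    print("max shift magnitude:", np.linalg.norm(shift, axis=1).max())
    ```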

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehta, Y.; Neal, C.; Salari, K.

    Propagation of a strong shock through a bed of particles results in complex wave dynamics such as a reflected shock, a transmitted shock, and highly unsteady flow inside the particle bed. In this paper we present three-dimensional numerical simulations of shock propagation in air over a random bed of particles. We assume the flow is inviscid and governed by the Euler equations of gas dynamics. Simulations are carried out by varying the volume fraction of the particle bed at a fixed shock Mach number. We compute the unsteady inviscid streamwise and transverse drag coefficients as a function of time for each particle in the random bed, for each volume fraction. We show that (i) there are significant variations in the peak drag for the particles in the bed, (ii) the mean peak drag as a function of streamwise distance through the bed decreases with a slope that increases as the volume fraction increases, and (iii) the deviation from the mean peak drag does not correlate with the local volume fraction. We also present the local Mach number and pressure contours for the different volume fractions to explain the various observed complex physical mechanisms occurring during the shock-particle interactions. Since the shock interaction with the random bed of particles leads to transmitted and reflected waves, we compute the average flow properties to characterize the strength of the transmitted and reflected shock waves and quantify the energy dissipation inside the particle bed. Finally, to better understand the complex wave dynamics in a random bed, we consider a simpler approximation of a planar shock propagating in a duct with a sudden area change. We obtain Riemann solutions to this problem, which are used for comparison with fully resolved numerical simulations.

  8. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  9. Gas dynamic and force effects of a solid particle in a shock wave in air

    NASA Astrophysics Data System (ADS)

    Obruchkova, L. R.; Baldina, E. G.; Efremov, V. P.

    2017-03-01

    Shock wave interaction with an adiabatic solid microparticle is numerically simulated. In the simulation, the shock wave is initiated by the Riemann problem with instantaneous removal of a diaphragm between the high- and low-pressure chambers. The calculation is performed in the two-dimensional formulation using the ideal gas equation of state. The left end of the tube is impermeable, while outflow from the right end is permitted. The particle is assumed to be motionless, impermeable, and adiabatic, and the simulation is performed for time intervals shorter than the velocity and temperature relaxation times of the particle. The numerical grid is chosen for each particle size to ensure convergence. For each particle size, the calculated hydraulic resistance coefficient describing the particle's force impact on the flow is compared with that obtained from the analytical Stokes formula. It is found that the Stokes formula can be used to calculate the hydraulic resistance of a motionless particle in a shock wave flow. The influence of the particle diameter on the flow perturbation behind the shock front is studied. Specific heating of the flow in front of the particle is calculated and a simple estimate is proposed. The whole heated region is divided by the acoustic line into subsonic and supersonic regions. It is demonstrated that most of the heat generated by the particle in the flow is concentrated in the subsonic region. The calculations are performed using two different 2D hydro codes. The energy release in the flow induced by the particle is compared with the maximum possible heating at complete stagnation of the flow. The results can be used for estimating the possibility of gas ignition in front of the particle by a shock wave whose amplitude is insufficient for initiating detonation in the absence of a particle.
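
    The Stokes comparison invoked in the abstract rests on F = 3πμdU, valid for Reynolds numbers well below one; a few illustrative post-shock numbers (assumed, not taken from the paper) are evaluated below.

    ```python
    import numpy as np

    def stokes_drag_force(mu, d, u_rel):
        """Stokes drag on a sphere, F = 3 pi mu d u, valid for Re << 1."""
        return 3.0 * np.pi * mu * d * u_rel

    def reynolds(rho, u_rel, d, mu):
        return rho * u_rel * d / mu

    # rough post-shock air properties and a 1 micron motionless particle
    rho, mu = 1.8, 2.2e-5        # kg/m^3, Pa s (illustrative values)
    u, d = 10.0, 1e-6            # m/s slip velocity, particle diameter
    Re = reynolds(rho, u, d, mu)
    F = stokes_drag_force(mu, d, u)
    print(f"Re = {Re:.3f}, Stokes drag = {F:.3e} N (formula strict for Re << 1)")
    ```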

  10. Theory of activated glassy dynamics in randomly pinned fluids.

    PubMed

    Phan, Anh D; Schweizer, Kenneth S

    2018-02-07

    We generalize the force-level, microscopic, Nonlinear Langevin Equation (NLE) theory and its elastically collective generalization [elastically collective nonlinear Langevin equation (ECNLE) theory] of activated dynamics in bulk spherical particle liquids to address the influence of random particle pinning on structural relaxation. The simplest neutral confinement model is analyzed for hard spheres where there is no change of the equilibrium pair structure upon particle pinning. As the pinned fraction grows, cage scale dynamical constraints are intensified in a manner that increases with density. This results in the mobile particles becoming more transiently localized, with increases of the jump distance, cage scale barrier, and NLE theory mean hopping time; subtle changes of the dynamic shear modulus are predicted. The results are contrasted with recent simulations. Similarities in relaxation behavior are identified in the dynamic precursor regime, including a roughly exponential, or weakly supra-exponential, growth of the alpha time with pinning fraction and a reduction of dynamic fragility. However, the increase of the alpha time with pinning predicted by the local NLE theory is too small and severely so at very high volume fractions. The strong deviations are argued to be due to the longer range collective elasticity aspect of the problem which is expected to be modified by random pinning in a complex manner. A qualitative physical scenario is offered for how the three distinct aspects that quantify the elastic barrier may change with pinning. ECNLE theory calculations of the alpha time are then presented based on the simplest effective-medium-like treatment for how random pinning modifies the elastic barrier. The results appear to be consistent with most, but not all, trends seen in recent simulations. Key open problems are discussed with regard to both theory and simulation.

  12. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
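
    The distinction between streaklines and instantaneous streamlines comes down to releasing particles over time and advecting each of them through the time-dependent field; a compact sketch with a synthetic unsteady velocity (an assumption for illustration, not the NASA system) is given below.

    ```python
    import numpy as np

    def velocity(x, y, t):
        """Synthetic unsteady 2D field: uniform stream plus a flapping
        transverse component (purely illustrative)."""
        return np.full_like(x, 1.0), 0.5 * np.sin(2.0*np.pi*(x - t))

    def streakline(x0, y0, t_end, dt):
        """Release one particle per step at the injection point and advect
        every released particle with a midpoint rule through the
        time-dependent field; the collected positions form the streakline."""
        xs, ys = np.array([]), np.array([])
        t = 0.0
        while t < t_end:
            xs, ys = np.append(xs, x0), np.append(ys, y0)   # new release
            u1, v1 = velocity(xs, ys, t)
            xm, ym = xs + 0.5*dt*u1, ys + 0.5*dt*v1
            u2, v2 = velocity(xm, ym, t + 0.5*dt)
            xs, ys = xs + dt*u2, ys + dt*v2
            t += dt
        return xs, ys

    x, y = streakline(0.0, 0.0, t_end=4.0, dt=0.02)
    print("streakline samples:", len(x), " y-range:", y.min(), y.max())
    ```

    An instantaneous streamline of the same field, computed by freezing t, would generally trace a different curve, which is the contrast the abstract draws.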

  13. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock (oscillation) problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and the obstacle avoidance behavior. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems through the optimization of the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined concerning the intelligent vehicle's driving characteristics, the distance between the intelligent vehicle and the obstacle, and the distance between the intelligent vehicle and the target. Secondly, behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function can reduce the perturbations of the planned vehicle path and improve its real-time performance and reliability.

  14. Structure of the disturbed region of the atmosphere after the nuclear explosion in Hiroshima

    NASA Astrophysics Data System (ADS)

    Shcherbin, M. D.; Pavlyukov, K. V.; Salo, A. A.; Pertsev, S. F.; Rikunov, A. V.

    2013-09-01

    An attempt is undertaken to describe the development of the disturbed region of the atmosphere caused by the nuclear explosion over Hiroshima on August 6, 1945. Numerical simulation of the phenomenon is performed using the dynamic equations for a nonconducting inviscid gas, taking into account the combustion of urban buildings, phase changes of water, electrification of ice particles, and removal of soot particles. The results of the numerical calculation of the development of the disturbed region indicate heavy rainfall, the formation of a storm cloud with lightning discharges, removal of soot particles, and the formation of vertical vortices. The temporal sequence of these meteorological phenomena is consistent with observational data. Because of the assumptions and approximations used in solving the problem, the results are of a qualitative nature; they could be refined by a more detailed study of the approximate initial and boundary conditions of the problem.

  15. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock (oscillation) problem of the intelligent vehicle navigation path, because the time-variant target behavior and the obstacle avoidance behavior occur simultaneously. Considering the safety and real-time requirements of intelligent vehicles, a particle swarm optimization (PSO) algorithm is proposed to solve this problem by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the driving characteristics of the intelligent vehicle, the distance between the vehicle and obstacles, and the distance between the vehicle and the target. Secondly, the behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function reduce the perturbations of the planned vehicle path and improve its real-time performance and reliability. PMID:26880881

  16. A three-dimensional electrostatic particle-in-cell methodology on unstructured Delaunay-Voronoi grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gatsonis, Nikolaos A.; Spirkin, Anton

    2009-06-01

    The mathematical formulation and computational implementation of a three-dimensional particle-in-cell methodology on unstructured Delaunay-Voronoi tetrahedral grids is presented. The method allows simulation of plasmas in complex domains and incorporates the duality of the Delaunay-Voronoi in all aspects of the particle-in-cell cycle. Charge assignment and field interpolation weighting schemes of zero- and first-order are formulated based on the theory of long-range constraints. Electric potential and fields are derived from a finite-volume formulation of Gauss' law using the Voronoi-Delaunay dual. Boundary conditions and the algorithms for injection, particle loading, particle motion, and particle tracking are implemented for unstructured Delaunay grids. Error and sensitivity analysis examines the effects of particles/cell, grid scaling, and timestep on the numerical heating, the slowing-down time, and the deflection times. The problem of current collection by cylindrical Langmuir probes in collisionless plasmas is used for validation. Numerical results compare favorably with previous numerical and analytical solutions for a wide range of probe radius to Debye length ratios, probe potentials, and electron to ion temperature ratios. The versatility of the methodology is demonstrated with the simulation of a complex plasma microsensor, a directional micro-retarding potential analyzer that includes a low transparency micro-grid.
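
    One standard realization of first-order weighting on a tetrahedral cell is to scatter a particle's charge to the four cell nodes with barycentric weights, as sketched below; this is a generic construction under that assumption, not necessarily the paper's exact scheme.

      import numpy as np

      def barycentric_weights(verts, p):
          # First-order weights of point p w.r.t. a tetrahedron's 4 vertices,
          # from sum(w_i) = 1 and sum(w_i * v_i) = p.
          A = np.vstack([np.asarray(verts, dtype=float).T, np.ones(4)])
          b = np.append(p, 1.0)
          return np.linalg.solve(A, b)

      def assign_charge(verts, node_ids, p, q, node_charge):
          # Scatter particle charge q to the cell's nodes (first order).
          for nid, w in zip(node_ids, barycentric_weights(verts, p)):
              node_charge[nid] += w * q

      # Unit tetrahedron; a particle at the centroid gives each node q/4.
      verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
      node_charge = np.zeros(4)
      assign_charge(verts, [0, 1, 2, 3], p=(0.25, 0.25, 0.25), q=1.0,
                    node_charge=node_charge)
      print(node_charge)  # -> [0.25 0.25 0.25 0.25]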

  17. Stochastic weighted particle methods for population balance equations with coagulation, fragmentation and spatial inhomogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang

    2015-12-15

    Highlights: Problems concerning multi-compartment population balance equations are studied. A class of fragmentation weight transfer functions is presented. Three stochastic weighted algorithms are compared against the direct simulation algorithm. The numerical errors of the stochastic solutions are assessed as a function of the fragmentation rate. The algorithms are applied to a multi-dimensional granulation model. Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to those of the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant number of large particles and high fragmentation rates.
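
    A toy illustration of the constant-particle-count idea behind such weight transfer functions, assuming the simplest case of binary fragmentation into equal halves: the computational particle is reused with halved size and doubled statistical weight, so the number of computational particles is unchanged while real fragment number and total mass are conserved. The paper's transfer functions are more general than this sketch.

      def fragment_in_place(particle):
          # particle = (size, weight). A real particle of size x splits into
          # two fragments of size x/2, so a computational particle of weight
          # w now represents 2w real fragments: halve the size, double the
          # weight, and keep the computational particle count constant.
          x, w = particle
          return (0.5 * x, 2.0 * w)

      # One event on a particle of size 1.0 carrying statistical weight 100:
      print(fragment_in_place((1.0, 100.0)))  # -> (0.5, 200.0); mass conserved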

  18. Mixing, segregation, and flow of granular materials

    NASA Astrophysics Data System (ADS)

    McCarthy, Joseph J.

    1998-11-01

    This dissertation addresses mixing, segregation, and flow of granular materials with the ultimate goal of providing fundamental understanding and tools for the rational design and optimization of mixing devices. In particular, the paradigm cases of a slowly rotated tumbler mixer and flow down an inclined plane are examined. Computational work and supporting experiments are used to probe both two- and three-dimensional systems. In the avalanching regime, the mixing and flow can be viewed either on a global scale or a local scale. On the global scale, material is transported via avalanches whose gross motion can be well described by geometrical considerations. On the local scale, the dynamics of the particle motion becomes important; particles follow complicated trajectories that are highly sensitive to differences in size/density/morphology. By decomposing the problem in this way, it is possible to study the implications of the geometry and dynamics separately and to add complexities in a controlled fashion. This methodology allows even seemingly difficult problems (i.e., mixing in non-convex geometries and mixing of dissimilar particles) to be probed in a simple yet methodical way. In addition, this technique provides predictions of optimal mixing conditions in an avalanching tumbler, a criterion for evaluating the effect of mixer shape, and mixing enhancement strategies for both two- and three-dimensional mixers. In the continuous regime, the flow can be divided into two regions: a rapid flow region of the cascading layer at the free surface, and a fixed bed region undergoing solid body rotation. A continuum-based description, in which averages are taken across the layer, generates quantitative predictions about the flow in the cascading layer and agrees well with experiment. Incorporating mixing through a diffusive flux (as well as a constitutive expression for segregation) within the cascading layer allows for the determination of optimal mixing conditions. Segregation requires a detailed understanding of the interplay between the flow and the properties of the particles. A relatively mature simulation technique, particle dynamics (PD), aptly captures these effects and is eminently suited to mixing studies; particle properties can be varied on a particle-by-particle basis, and detailed mixed structures are easily captured and visualized. However, PD is computationally intensive and is therefore of questionable general utility. By combining PD and geometrical insight (in essence, by focusing the particle dynamics simulation only where it is needed), a new hybrid simulation method can be achieved that is much faster than a conventional particle dynamics method. This technique can yield more than an order of magnitude increase in computational speed while maintaining the versatility of a particle dynamics simulation. Alternatively, by utilizing PD to explore segregation mechanisms in simple flows (e.g., flow down an inclined plane), heuristic models and constitutive relations for segregation can be tested. Incorporating these segregation flux terms into a continuum description of the flow in a tumbler allows rapid Lagrangian simulation of the competition between mixing and segregation. For the case of density segregation, this produces good agreement between theory and experiment with essentially no adjustable parameters. In addition, an accurate quantitative prediction of the optimal mixing time is obtained.

  19. An improved random walk algorithm for the implicit Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keady, Kendra P., E-mail: keadyk@lanl.gov; Cleveland, Mathew A.

    In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
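
    The routing logic can be sketched as below, under the stated cutoff idea: particles in frequency groups below the cell's cutoff become eligible for gray Random Walk with a group-collapsed opacity, while the rest stay on standard multigroup IMC. The unweighted mean used for the collapse is a placeholder; a production code would use a spectrum-weighted collapse.

      from dataclasses import dataclass

      @dataclass
      class Particle:
          group: int       # frequency group index (0 = lowest frequency)
          position: float

      def choose_transport(p, group_cutoff):
          # Partially-Gray Random Walk routing: below the cell's cutoff the
          # particle is eligible for gray RW; otherwise it stays on IMC.
          return "random_walk" if p.group < group_cutoff else "imc"

      def collapse_opacity(sigma_g, group_cutoff):
          # Gray RW opacity from the group opacities below the cutoff
          # (simple unweighted mean here, for illustration only).
          below = sigma_g[:group_cutoff]
          return sum(below) / len(below)

      print(choose_transport(Particle(group=2, position=0.0), group_cutoff=4))
      print(collapse_opacity([10.0, 8.0, 5.0, 2.0, 0.5], group_cutoff=4))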

  20. Simulations of the plasma dynamics in high-current ion diodes

    NASA Astrophysics Data System (ADS)

    Boine-Frankenheim, O.; Pointon, T. D.; Mehlhorn, T. A.

    Our time-implicit fluid/Particle-In-Cell (PIC) code DYNAID [1] is applied to problems relevant to applied-B ion diode operation. We present simulations of the laser ion source, which will soon be employed on the SABRE accelerator at SNL, and of the dynamics of the anode source plasma in the applied electric and magnetic fields. DYNAID is still a test-bed for a higher-dimensional simulation code. Nevertheless, the code can already give new theoretical insight into the dynamics of plasmas in pulsed power devices.

  1. Nonequilibrium flows with smooth particle applied mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kum, Oyeon

    1995-07-01

    Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space, when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel link to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for three weighting functions: the B-spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of the continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
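
    Lucy's function, one of the three weighting functions evaluated here, is sketched below in its commonly quoted 3D-normalized form, together with the SPH density sum it enters; the normalization constant 105/(16π) follows from requiring a unit volume integral.

      import numpy as np

      def lucy_kernel_3d(r, h):
          # Lucy (1977) smoothing kernel in 3D with compact support r <= h,
          # normalized so its volume integral equals 1.
          q = np.asarray(r, dtype=float) / h
          w = (105.0 / (16.0 * np.pi * h**3)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
          return np.where(q <= 1.0, w, 0.0)

      def sph_density(masses, distances, h):
          # Smoothed density estimate at a point from neighbor masses and
          # neighbor distances.
          return np.sum(masses * lucy_kernel_3d(distances, h))

      rho = sph_density(masses=np.full(10, 1e-3),
                        distances=np.linspace(0.05, 0.45, 10), h=0.5)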

  2. A particle finite element method for machining simulations

    NASA Astrophysics Data System (ADS)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested by a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.
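
    The essence of the α-shape test can be sketched for a single triangle: the element is kept only if its circumradius is below α times the characteristic particle spacing h, so overly stretched candidate elements at free surfaces are discarded. The value α = 1.4 below is an illustrative choice, not the paper's.

      import math

      def circumradius(p1, p2, p3):
          # R = abc / (4A), with the area A from Heron's formula.
          a, b, c = math.dist(p2, p3), math.dist(p1, p3), math.dist(p1, p2)
          s = 0.5 * (a + b + c)
          area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
          return float("inf") if area == 0.0 else a * b * c / (4.0 * area)

      def alpha_shape_keep(p1, p2, p3, h, alpha=1.4):
          # Keep the candidate element only if it is compact enough.
          return circumradius(p1, p2, p3) < alpha * h

      print(alpha_shape_keep((0, 0), (1, 0), (0, 1), h=1.0))  # True (compact)
      print(alpha_shape_keep((0, 0), (5, 0), (0, 5), h=1.0))  # False (stretched)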

  3. Lagrangian transported MDF methods for compressible high speed flows

    NASA Astrophysics Data System (ADS)

    Gerlinger, Peter

    2017-06-01

    This paper deals with the application of thermochemical Lagrangian MDF (mass density function) methods for compressible sub- and supersonic RANS (Reynolds Averaged Navier-Stokes) simulations. A new approach to treat molecular transport is presented. This technique on the one hand ensures numerical stability of the particle solver in laminar regions of the flow field (e.g. in the viscous sublayer) and on the other hand takes differential diffusion into account. It is shown in a detailed analysis that the new method correctly predicts first- and second-order moments on the basis of conventional modeling approaches. Moreover, a number of challenges for MDF particle methods in high speed flows are discussed, e.g. high cell aspect ratio grids close to solid walls, wall heat transfer, shock resolution, and problems from statistical noise which may cause artificial shock systems in supersonic flows. A Mach 2 supersonic mixing channel with multiple shock reflection and a model rocket combustor simulation demonstrate the suitability of this technique for practical applications. Both test cases are simulated successfully for the first time with a hybrid finite-volume (FV)/Lagrangian particle solver (PS).

  4. Zonal methods for the parallel execution of range-limited N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.

    2007-01-20

    Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.

  5. Optical Extinction Measurements of Dust Density in the GMRO Regolith Test Bin

    NASA Technical Reports Server (NTRS)

    Lane, J.; Mantovani, J.; Mueller, R.; Nugent, M.; Nick, A.; Schuler, J.; Townsend, I.

    2016-01-01

    A regolith simulant test bin was constructed and completed in the Granular Mechanics and Regolith Operations (GMRO) Lab in 2013. This Planetary Regolith Test Bed (PRTB), a 64 sq m x 1 m deep test bin housed in a climate-controlled facility, contains 120 MT of lunar-regolith simulant, called Black Point-1 or BP-1, from Black Point, AZ. One of the current uses of the test bin is to study the effects of difficult lighting and dust conditions on telerobotic perception systems, to better assess and refine regolith operations for asteroid, Mars, and polar lunar missions. Low illumination and low angle of incidence lighting pose significant problems to computer vision and human perception. Levitated dust on asteroids interferes with imaging and degrades depth perception. Dust storms on Mars pose a significant problem. Due to these factors, the likely performance of telerobotics is poorly understood for future missions. Current space telerobotic systems are only operated in bright lighting and dust-free conditions. This technology development testing will identify: (1) the impact of degraded lighting and environmental dust on computer vision and operator perception, (2) potential methods and procedures for mitigating these impacts, and (3) requirements for telerobotic perception systems for asteroid capture, Mars dust storm, and lunar regolith ISRU missions. In order to solve some of these telerobotic perception problems, a plume erosion sensor (PES) was developed in the Lunar Regolith Simulant Bin (LRSB), containing 2 MT of JSC-1a lunar simulant. PES is simply a laser and a digital camera with a white target. Two modes of operation have been investigated: (1) single laser spot, where the brightness of the spot depends on the optical extinction due to dust and is thus an indirect measure of particle number density, and (2) side-scatter, where the camera images the laser from the side, showing the beam's entrance into the dust cloud and the boundary between dust and void. Both methods must assume a mean particle size in order to extract a number density. The optical extinction measurement yields the product of the 2nd moment of the particle size distribution and the extinction efficiency Qe. For particle sizes in the range of interest (greater than 1 micrometer), Qe is approximately equal to 2. Scaling up of the PES single laser and camera system is underway in the PRTB, where an array of lasers penetrates a controlled dust cloud, illuminating multiple targets. Using high-speed HD GoPro video cameras, the evolution of the dust cloud and particle density can be studied in detail.
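
    Under the stated assumptions (monodisperse spheres of a known mean radius and Qe ≈ 2), the single-spot extinction measurement inverts to a number density through Beer-Lambert attenuation; the sketch below uses hypothetical illustrative values for the beam path and dimming.

      import math

      def number_density(i_ratio, path_length, radius, q_ext=2.0):
          # Beer-Lambert for monodisperse spheres:
          #   I/I0 = exp(-n * Q_ext * pi * r^2 * L)  ->  solve for n.
          sigma = q_ext * math.pi * radius**2   # extinction cross-section
          return -math.log(i_ratio) / (sigma * path_length)

      # 20 um radius dust, 2 m beam path, spot dimmed to 60% of clear value:
      n = number_density(i_ratio=0.60, path_length=2.0, radius=20e-6)
      print(f"{n:.3e} particles per m^3")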

  6. Dredging for dilution: A simulation based case study in a Tidal River.

    PubMed

    Bilgili, Ata; Proehl, Jeffrey A; Swift, M Robinson

    2016-02-01

    A 2-D hydrodynamic finite element model with a Lagrangian particle module is used to investigate the effects of dredging on the hydrodynamics and the horizontal dilution of pollutant particles originating from a wastewater treatment facility (WWTF) in the tidal Oyster River in New Hampshire, USA. The model is driven by the semi-diurnal (M2) tidal component and includes the effect of flooding and drying of riverine mud flats. The particle tracking method consists of tidal advection plus a horizontal random walk model of sub-grid scale turbulent processes. Our approach is to perform continuous pollutant particle releases from the outfall, simulating three different scenarios: a base case representing present conditions and two different dredged channel/outfall location configurations. Hydrodynamics are investigated in an Eulerian framework, and Lagrangian particle dilution improvement ratios are calculated for all cases. Results show that the simulated hydrodynamics are consistent with observed conditions. Eulerian and Lagrangian residuals predict an outward path, suggesting flushing of pollutants on longer (>M2) time scales. Simulated dilution maps show that, in addition to dredging, the relocation of the WWTF outfall into the dredged main channel is required for increased dilution performance. The methodology presented here can be applied to similar managerial problems in comparable systems worldwide with relatively little effort, with Lagrangian and Eulerian methods working together towards a better solution. The statistical significance brought into the methodology by using a large number of particles (16,000 in this case) is to be emphasized, especially given the growing number of networked parallel computer clusters worldwide. This paper improves on the study presented in Bilgili et al. (2006b) by adding an Eulerian analysis.
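
    The particle tracking scheme (tidal advection plus a horizontal random walk) can be sketched in a few lines; the uniform velocity field and the horizontal diffusivity kh below are hypothetical placeholders for the model's interpolated M2 currents and its calibrated sub-grid diffusivity.

      import numpy as np

      rng = np.random.default_rng(1)

      def track_step(pos, u, dt, kh):
          # One step: advection by the interpolated velocity u at each
          # particle position, plus a random-walk jump representing
          # sub-grid turbulent diffusion with diffusivity kh.
          jump = rng.normal(0.0, 1.0, pos.shape) * np.sqrt(2.0 * kh * dt)
          return pos + u * dt + jump

      # 16,000 particles released at the outfall, as in the study:
      pos = np.zeros((16000, 2))
      u = np.tile([0.1, 0.02], (16000, 1))       # placeholder currents [m/s]
      pos = track_step(pos, u, dt=60.0, kh=1.0)  # 60 s step, kh in m^2/s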

  7. Particle acceleration with anomalous pitch angle scattering in 2D magnetohydrodynamic reconnection simulations

    NASA Astrophysics Data System (ADS)

    Borissov, A.; Kontar, E. P.; Threlfall, J.; Neukirch, T.

    2017-09-01

    The conversion of magnetic energy into other forms (such as plasma heating, bulk plasma flows, and non-thermal particles) during solar flares is one of the outstanding open problems in solar physics. It is generally accepted that magnetic reconnection plays a crucial role in these conversion processes. In order to achieve the rapid energy release required in solar flares, an anomalous resistivity, which is orders of magnitude higher than the Spitzer resistivity, is often used in magnetohydrodynamic (MHD) simulations of reconnection in the corona. The origin of Spitzer resistivity is based on Coulomb scattering, which becomes negligible at the high energies achieved by accelerated particles. As a result, simulations of particle acceleration in reconnection events are often performed in the absence of any interaction between accelerated particles and any background plasma. This need not be the case for scattering associated with anomalous resistivity caused by turbulence within solar flares, as the higher resistivity implies an elevated scattering rate. We present results of test particle calculations, with and without pitch angle scattering, subject to fields derived from MHD simulations of two-dimensional (2D) X-point reconnection. Scattering rates proportional to the ratio of the anomalous resistivity to the local Spitzer resistivity, as well as at fixed values, are considered. Pitch angle scattering, which is independent of the anomalous resistivity, causes higher maximum energies in comparison to those obtained without scattering. Scattering rates which are dependent on the local anomalous resistivity tend to produce fewer highly energised particles due to weaker scattering in the separatrices, even though scattering in the current sheet may be stronger when compared to resistivity-independent scattering. Strong scattering also causes an increase in the number of particles exiting the computational box in the reconnection outflow region, as opposed to along the separatrices as is the case in the absence of scattering.

  8. Monte Carlo calculation of proton stopping power and ranges in water for therapeutic energies

    NASA Astrophysics Data System (ADS)

    Bozkurt, Ahmet

    2017-09-01

    Monte Carlo is a statistical technique for obtaining numerical solutions to physical or mathematical problems that are analytically impractical, if not impossible, to solve. For charged particle transport problems, it presents many advantages over deterministic methods, since such problems require a realistic description of the problem geometry as well as detailed tracking of every source particle. Thus, MC can be considered a powerful alternative to the well-known Bethe-Bloch approach, in which an equation with various corrections is used to obtain stopping powers and ranges of electrons, positrons, protons, alphas, etc. This study presents how a stochastic method such as MC can be utilized to obtain certain quantities of practical importance related to charged particle transport. Sample simulation geometries were formed for a water medium, where disk-shaped thin detectors were employed to compute average values of absorbed dose and flux at specific distances. For each detector cell, these quantities were used to evaluate the range and the stopping power, as well as the shape of the Bragg curve, for mono-energetic point-source pencil beams of protons. The results were found to agree within ±2% with the data from the NIST compilation. It is safe to conclude that this approach can be extended to determine dosimetric quantities for other media, energies, and charged particle types.
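
    For comparison with such MC estimates, the uncorrected Bethe formula for proton stopping power in water can be evaluated directly; the sketch below uses standard constants with a mean excitation energy I ≈ 75 eV for water, and reproduces the NIST value at 150 MeV to within a few percent.

      import math

      ME_C2 = 0.510998951   # electron rest energy [MeV]
      MP_C2 = 938.2720813   # proton rest energy [MeV]
      K = 0.307075          # 4*pi*N_A*r_e^2*m_e*c^2 [MeV cm^2/mol]

      def bethe_stopping_power(t_mev, z_over_a=0.5551, i_ev=75.0, z=1):
          # Mass stopping power -dE/dx [MeV cm^2/g] of a proton with kinetic
          # energy t_mev in water, from the uncorrected Bethe formula.
          gamma = 1.0 + t_mev / MP_C2
          beta2 = 1.0 - 1.0 / gamma**2
          wlog = math.log(2.0 * ME_C2 * 1e6 * beta2 * gamma**2 / i_ev)
          return K * z_over_a * z**2 / beta2 * (wlog - beta2)

      print(bethe_stopping_power(150.0))  # ~5.4 MeV cm^2/g at 150 MeV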

  9. The bar-halo interaction - I. From fundamental dynamics to revised N-body requirements

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.; Katz, Neal

    2007-02-01

    A galaxy remains near equilibrium for most of its history. Only through resonances can non-axisymmetric features, such as spiral arms and bars, exert torques over large scales and change the overall structure of the galaxy. In this paper, we describe the resonant interaction mechanism in detail, derive explicit criteria for the particle number required to simulate these dynamical processes accurately using N-body simulations, and illustrate them with numerical experiments. To do this, we perform a direct numerical solution of perturbation theory, in short, by solving for each orbit in an ensemble, and make detailed comparisons with N-body simulations. The criteria include: sufficient particle coverage in phase space near the resonance and enough particles to minimize gravitational potential fluctuations that will change the dynamics of the resonant encounter. These criteria are general in concept and can be applied to any dynamical interaction. We use the bar-halo interaction as our primary example owing to its technical simplicity and astronomical ubiquity. Some of our more surprising findings are as follows. First, the inner Lindblad-like resonance, responsible for coupling the bar to the central halo cusp, requires more than equal-mass particles within the virial radius or inside the bar radius for a Milky Way-like bar in a Navarro, Frenk & White profile. Secondly, orbits that linger near the resonance receive more angular momentum than orbits that move through the resonance quickly. Small-scale fluctuations present in state-of-the-art particle-particle simulations can knock orbits out of resonance, preventing them from lingering and thereby decreasing the torque per orbit. This can be offset by the larger number of orbits affected by the resonance due to the diffusion. However, noise from orbiting substructure remains at least an order of magnitude too small to be of consequence. Applied to N-body simulations, the required particle numbers are sufficiently high for scenarios of interest that apparent convergence in particle number is misleading: the convergence with N may still be in the noise-dominated regime. State-of-the-art simulations are not adequate to follow all aspects of secular evolution driven by the bar-halo interaction. It is not possible to derive particle number requirements that apply to all situations; for example, more subtle interactions may be even more difficult to simulate. Therefore, we present a procedure to test the requirements of individual N-body codes for the actual problem of interest.

  10. On the modeling of the 2010 Gulf of Mexico Oil Spill

    NASA Astrophysics Data System (ADS)

    Mariano, A. J.; Kourafalou, V. H.; Srinivasan, A.; Kang, H.; Halliwell, G. R.; Ryan, E. H.; Roffer, M.

    2011-09-01

    Two oil particle trajectory forecasting systems were developed and applied to the 2010 Deepwater Horizon Oil Spill in the Gulf of Mexico. Both systems use ocean current fields from high-resolution numerical ocean circulation model simulations, Lagrangian stochastic models to represent unresolved sub-grid scale variability advecting the oil particles, and Monte Carlo-based schemes for representing uncertain biochemical and physical processes. The first system assumes two-dimensional particle motion at the ocean surface, a single oil state, and particle removal modeled as a Monte Carlo process parameterized by a single removal rate. Oil particles are seeded using both initial conditions based on observations and particles released at the location of the Macondo well. The initial conditions (ICs) of oil particle location for the two-dimensional surface oil trajectory forecasts are based on a fusion of all available information, including satellite-based analyses. The resulting oil map is digitized into a shape file, within which polygon-filling software generates longitudes and latitudes with a variable particle density depending on the amount of oil present in the observations. The more complex system assumes three oil states (light, medium, heavy), each with a different removal rate in the Monte Carlo process, three-dimensional particle motion, and a particle size-dependent oil mixing model. Simulations from the two-dimensional forecast system produced results that qualitatively agreed with the uncertain "truth" fields. These simulations validated the use of our Monte Carlo scheme for representing oil removal by evaporation and other weathering processes. Eulerian velocity fields for predicting particle motion from data-assimilative models produced better particle trajectory distributions than a free-running model with no data assimilation. Ensembles for the Monte Carlo simulations of the three-dimensional oil particle trajectories were generated by perturbing the size of the oil particles and the fraction in a given size range released at depth, the two largest unknowns in this problem. Thirty-six realizations of the model were run with only subsurface oil releases. Averaging these results indicates that after three months about 25% of the oil remains in the water column and that most of this oil is below 800 m.
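
    The one-number Monte Carlo removal process of the simpler system can be sketched as an exponential survival test applied to the active particle ensemble at each step; the rate and time step below are illustrative values, not those used in the study.

      import numpy as np

      rng = np.random.default_rng(2)

      def weathering_step(active, removal_rate, dt):
          # Each active particle survives the step with probability
          # exp(-removal_rate * dt); the rest are removed (weathered).
          survive = rng.random(active.sum()) < np.exp(-removal_rate * dt)
          active[np.flatnonzero(active)[~survive]] = False
          return active

      active = np.ones(10000, dtype=bool)   # oil particle ensemble
      for _ in range(90):                   # 90 daily steps
          active = weathering_step(active, removal_rate=0.02, dt=1.0)
      print(active.mean())                  # fraction remaining, ~exp(-1.8)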

  11. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  12. High-performance multiprocessor architecture for a 3-D lattice gas model

    NASA Technical Reports Server (NTRS)

    Lee, F.; Flynn, M.; Morf, M.

    1991-01-01

    The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.

  13. Instruction Using Experiments in a Computer. Final Report.

    ERIC Educational Resources Information Center

    Fulton, John P.; Hazeltine, Barrett

    Included are four computer programs which simulate experiments suitable for freshman engineering and physics courses. The subjects of the programs are ballistic trajectories, variable mass systems, trajectory of a particle under various forces, and design of an electronic amplifier. The report includes the problem statement, its objectives, the…

  14. Reconstruction From Multiple Particles for 3D Isotropic Resolution in Fluorescence Microscopy.

    PubMed

    Fortun, Denis; Guichard, Paul; Hamel, Virginie; Sorzano, Carlos Oscar S; Banterle, Niccolo; Gonczy, Pierre; Unser, Michael

    2018-05-01

    The imaging of proteins within macromolecular complexes has been limited by the low axial resolution of optical microscopes. To overcome this problem, we propose a novel computational reconstruction method that yields isotropic resolution in fluorescence imaging. The guiding principle is to reconstruct a single volume from the observations of multiple rotated particles. Our new operational framework detects particles, estimates their orientation, and reconstructs the final volume. The main challenge comes from the absence of an initial template and of a priori knowledge about the orientations. We formulate the estimation as a blind inverse problem and propose a block-coordinate stochastic approach to solve the associated non-convex optimization problem. The reconstruction is performed jointly in multiple channels. We demonstrate that our method is able to reconstruct volumes with 3D isotropic resolution on simulated data. We also perform isotropic reconstructions from real experimental data of doubly labeled purified human centrioles. Our approach revealed the precise localization of the centriolar protein Cep63 around the centriole microtubule barrel. Overall, our method offers new perspectives for applications in biology that require the isotropic mapping of proteins within macromolecular assemblies.

  15. Space-charge-sustained microbunch structure in the Los Alamos Proton Storage Ring

    NASA Astrophysics Data System (ADS)

    Cousineau, S.; Danilov, V.; Holmes, J.; Macek, R.

    2004-09-01

    We present experimental data from the Los Alamos Proton Storage Ring (PSR) showing long-lived linac microbunch structure during beam storage with no rf bunching. Analysis of the experimental data and particle-in-cell simulations of the experiments indicate that space charge, coupled with energy spread effects, is responsible for the sustained microbunch structure. The simulated longitudinal phase space of the beam reveals a well-defined separatrix in the phase space between linac microbunches, with particles executing unbounded motion outside of the separatrix. We show that the longitudinal phase space of the beam was near steady state during the PSR experiments, such that the separatrix persisted for long periods of time. Our simulations indicate that the steady state is very sensitive to the experimental conditions. Finally, we solve the steady-state problem in an analytic, self-consistent fashion for a set of periodic longitudinal space-charge potentials.

  16. PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils

    NASA Technical Reports Server (NTRS)

    Johnson, Scott; Walton, Otis; Settgast, Randolph

    2013-01-01

    PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.

  17. Particle acceleration and plasma dynamics during magnetic reconnection in the magnetically dominated regime

    DOE PAGES

    Guo, Fan; Liu, Yi-Hsin; Daughton, William; ...

    2015-06-17

    Magnetic reconnection is thought to be the driver for many explosive phenomena in the universe. The energy release and particle acceleration during reconnection have been proposed as a mechanism for producing high-energy emissions and cosmic rays. We carry out two- and three-dimensional (3D) kinetic simulations to investigate relativistic magnetic reconnection and the associated particle acceleration. The simulations focus on electron–positron plasmas starting with a magnetically dominated, force-free current sheet (σ ≡ B^2/(4π n_e m_e c^2) >> 1). For this limit, we demonstrate that relativistic reconnection is highly efficient at accelerating particles through a first-order Fermi process accomplished by the curvature drift of particles along the electric field induced by the relativistic flows. This mechanism gives rise to the formation of hard power-law spectra f ∝ (γ - 1)^(-p), which approach p = 1 for sufficiently large σ and system size. Eventually most of the available magnetic free energy is converted into nonthermal particle kinetic energy. An analytic model is presented to explain the key results and predict a general condition for the formation of power-law distributions. The development of reconnection in these regimes leads to relativistic inflow and outflow speeds and enhanced reconnection rates relative to nonrelativistic regimes. In the 3D simulation, the interplay between secondary kink and tearing instabilities leads to strong magnetic turbulence, but does not significantly change the energy conversion, reconnection rate, or particle acceleration. This paper suggests that relativistic reconnection sites are strong sources of nonthermal particles, which may have important implications for a variety of high-energy astrophysical problems.

  18. Axisymmetric charge-conservative electromagnetic particle simulation algorithm on unstructured grids: Application to microwave vacuum electronic devices

    NASA Astrophysics Data System (ADS)

    Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.

    2017-10-01

    We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag(1, ρ^2, 1) to an equivalent problem on a Cartesian metric tensor diag(1, 1, 1), with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E-B finite-element time-domain scheme for the fully discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolating field values to particle positions for solving the Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated considering cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from coherent (resonance) Cerenkov electron beam interactions within micro-machined slow wave structures.

  19. Software for Processing Flight and Simulated Data of the ATIC Experiment

    NASA Technical Reports Server (NTRS)

    Panov, A. D.; Adams, J. H., Jr.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Case, G.; Christl, M.; Chang, J.; Fazely, A. R.; Ganel, O.

    2002-01-01

    ATIC (Advanced Thin Ionization Calorimeter) is a balloon-borne experiment designed to measure the cosmic ray composition for elements from hydrogen to iron and their energy spectra from approximately 50 GeV to near 100 TeV. It consists of a Si-matrix detector to determine the charge of a CR particle, a scintillator hodoscope for tracking, carbon interaction targets, and a fully active BGO calorimeter. ATIC had its first flight from McMurdo, Antarctica, from 28/12/2000 to 13/01/2001, and collected approximately 25 million events. A C++ class library for building different programs for processing flight and simulated data of the ATIC balloon experiment is described. This library is compatible with the ROOT system and includes classes and methods for solving problems such as the following: reading data files in different formats (raw-data format, ROOT format, ASCII format, and different formats for simulated data); converting all these formats to the library's single internal format; reconstruction of the trajectories of primary particles with the BGO calorimeter only (Monte Carlo simulations with the GEANT code were used to obtain the basic tables for computing error corridors and χ²-values for the trajectories); obtaining error corridors for the search for the primary particle signal in the Si-matrix; searching for the hit of the primary particle in the Si-matrix using the error corridor and other criteria (χ²-values, agreement between signals in the Si-matrix and in the upper scintillator layer, among others); determination of the charge of the primary particle; and determination of the energy deposit in the BGO calorimeter.

  20. A particle-based model to simulate the micromechanics of single-plant parenchyma cells and aggregates

    NASA Astrophysics Data System (ADS)

    Van Liedekerke, P.; Ghysels, P.; Tijskens, E.; Samaey, G.; Smeedts, B.; Roose, D.; Ramon, H.

    2010-06-01

    This paper is concerned with addressing how plant tissue mechanics is related to the micromechanics of cells. To this end, we propose a mesh-free particle method to simulate the mechanics of both individual plant cells (parenchyma) and cell aggregates in response to external stresses. The model considers two important features of the plant cell: (1) the cell protoplasm, the interior liquid phase inducing hydrodynamic phenomena, and (2) the cell wall material, a viscoelastic solid material that contains the protoplasm. In this particle framework, the cell fluid is modeled by smoothed particle hydrodynamics (SPH), a mesh-free method typically used to address problems in gas and fluid dynamics. In the solid phase (cell wall), on the other hand, the particles are connected by pairwise interactions holding them together and preventing the fluid from penetrating the cell wall. The cell wall hydraulic conductivity (permeability) is built in as well through the SPH formulation. Although this model is also meant to be able to deal with dynamic and even violent situations (leading to cell wall rupture or cell-cell debonding), we have concentrated on quasi-static conditions. The results of single-cell compression simulations show that the conclusions found by analytical models and experiments can be reproduced at least qualitatively. Relaxation tests revealed that plant cells have short relaxation times (1 µs-10 µs) compared to mammalian cells. Simulations performed on cell aggregates indicated an influence of the cellular organization on the tissue response, as was also observed in experiments done on tissues with a similar structure.

  1. High-resolution extraction of particle size via Fourier Ptychography

    NASA Astrophysics Data System (ADS)

    Li, Shengfu; Zhao, Yu; Chen, Guanghua; Luo, Zhenxiong; Ye, Yan

    2017-11-01

    This paper proposes a method which can extract particle size information with a resolution beyond λ/NA. This is achieved by applying Fourier Ptychography (FP) ideas to the present problem. In a typical FP imaging platform, a 2D LED array is used as the light source for angle-varied illuminations, and a series of low-resolution images is taken by a full sequential scan of the array of LEDs. Here, we demonstrate that the particle size information can be extracted by turning on single LEDs along a circle. The simulated results show that the proposed method can reduce the total number of images without loss of reliability in the results.

  2. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
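
    The leapfrog option mentioned above, in its common kick-drift-kick form, can be sketched as follows; this is the generic scheme, not VINE's actual implementation (which also supports individual per-particle time steps). A circular orbit around a fixed unit point mass serves as a smoke test.

      import numpy as np

      def leapfrog_step(x, v, accel, dt):
          # Kick-drift-kick: half velocity update, full position update,
          # half velocity update with the new accelerations.
          v_half = v + 0.5 * dt * accel(x)
          x_new = x + dt * v_half
          return x_new, v_half + 0.5 * dt * accel(x_new)

      def accel(x):
          # Acceleration from a unit point mass at the origin (G = M = 1).
          r = np.linalg.norm(x, axis=1, keepdims=True)
          return -x / r**3

      x = np.array([[1.0, 0.0, 0.0]])
      v = np.array([[0.0, 1.0, 0.0]])   # circular orbit speed at r = 1
      for _ in range(1000):
          x, v = leapfrog_step(x, v, accel, dt=0.01)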

  3. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.

  4. Hyper-Systolic Processing on APE100/QUADRICS: n²-Loop Computations

    NASA Astrophysics Data System (ADS)

    Lippert, Thomas; Ritzenhöfer, Gero; Glaessner, Uwe; Hoeber, Henning; Seyfried, Armin; Schilling, Klaus

    We investigate the performance gains from hyper-systolic implementations of n²-loop problems on the massively parallel computer Quadrics, exploiting its three-dimensional interprocessor connectivity. For illustration we study the communication aspects of an exact molecular dynamics simulation of n particles with Coulomb (or gravitational) interactions. We compare the interprocessor communication costs of the standard-systolic and the hyper-systolic approaches for various granularities. We predict gain factors as large as three on the Q4 and eight on the QH4 and measure actual performances on these machine configurations. We conclude that it appears feasible to investigate the thermodynamics of a full gravitating n-body problem with O(16,000) particles using the new method on a QH4 system.

  5. Simulation of sediment settling in reduced gravity

    NASA Astrophysics Data System (ADS)

    Kuhn, Nikolaus; Kuhn, Brigitte; Rüegg, Hans-Rudolf; Gartmann, Andres

    2015-04-01

    Gravity has a non-linear effect on the settling velocity of sediment particles in liquids and gases due to the interdependence of settling velocity, drag, and friction. However, Stokes' Law and similar empirical models, the common way of estimating the terminal velocity of a particle settling in a gas or liquid, carry the notion of drag as a property of the particle rather than a force generated by the flow around the particle. For terrestrial applications, this simplifying assumption is not relevant, but it may strongly influence the terminal velocity achieved by settling particles on other planetary bodies. False estimates of these settling velocities will, in turn, affect the interpretation of particle sizes observed in sedimentary rocks, e.g. on Mars, and the search for traces of life. Simulating sediment settling velocities on other planets with a numeric simulation based on the Navier-Stokes equations and computational fluid dynamics requires a prohibitive amount of time and lacks measurements against which to test the quality of the results. The aim of the experiments presented in this study was therefore to quantify the error incurred by using settling velocity models calibrated on Earth at reduced gravities, such as those on the Moon and Mars. In principle, the effect of lower gravity on settling velocity can be mimicked by reducing the difference in density between particle and liquid. However, the use of such analogues creates other problems because the properties (i.e. viscosity) and interaction of the liquids and sediment (i.e. flow around the boundary layer between liquid and particle) differ from those of water and mineral particles. An alternative for measuring actual settling velocities of particles under reduced gravity on Earth is to place a settling tube on a reduced-gravity flight and conduct settling velocity measurements within the 20 to 25 seconds of Martian gravity that can be simulated during such a flight. In this presentation, the results of the experiments conducted during the MarsSedEx I and II reduced-gravity flights are reported, focusing both on the feasibility of experiments in reduced gravity and on the error incurred when using terrestrial drag coefficients to calculate sediment settling on another planet.
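
    The linear dependence on g assumed by terrestrial settling models is easy to see in Stokes' law, sketched below for a fine quartz grain in water under Earth, Mars, and Moon gravity; it is exactly this simple scaling, with drag treated as independent of the flow regime, that the flight experiments put to the test. The grain and fluid values are illustrative.

      def stokes_settling_velocity(d, rho_p, rho_f, mu, g):
          # Stokes terminal velocity of a sphere (valid only for small
          # particle Reynolds numbers):
          #   v = (rho_p - rho_f) * g * d^2 / (18 * mu)
          return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

      args = dict(d=50e-6, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3)
      for name, g in [("Earth", 9.81), ("Mars", 3.71), ("Moon", 1.62)]:
          print(name, stokes_settling_velocity(g=g, **args), "m/s")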

  6. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    NASA Astrophysics Data System (ADS)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring the transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192³ simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster than a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking on the order of 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support for PGAS models in major compilers suggests that this algorithm will be widely applicable on upcoming supercomputers.

  7. Cosmological N-body Simulation

    NASA Astrophysics Data System (ADS)

    Lake, George

    1994-05-01

    The ``N'' in N-body calculations has doubled every year for the last two decades. To continue this trend, the UW N-body group is working on algorithms for the fast evaluation of gravitational forces on parallel computers and establishing rigorous standards for the computations. In these algorithms, the computational cost per time step is ~10^3 pairwise forces per particle. A new adaptive time integrator enables us to perform high quality integrations that are fully temporally and spatially adaptive. SPH (smoothed particle hydrodynamics) will be added to simulate the effects of dissipating gas and magnetic fields. The importance of these calculations is two-fold. First, they determine the nonlinear consequences of theories for the structure of the Universe. Second, they are essential for the interpretation of observations. Every galaxy has six coordinates of velocity and position. Observations determine two sky coordinates and a line-of-sight velocity that bundles universal expansion (distance) together with a random velocity created by the mass distribution. Simulations are needed to determine the underlying structure and masses. The importance of simulations has moved from ex post facto explanation to an integral part of planning large observational programs. I will show why high quality simulations with ``large N'' are essential to accomplish our scientific goals. This year, our simulations have N ≳ 10^7. This is sufficient to tackle some niche problems, but well short of our 5 year goal--simulating the Sloan Digital Sky Survey using a few billion particles (a Teraflop-year simulation). Extrapolating past trends, we would have to ``wait'' 7 years for this hundred-fold improvement. Like past gains, significant changes in the computational methods are required for these advances. I will describe new algorithms, algorithmic hacks and a dedicated computer to perform billion-particle simulations. Finally, I will describe research that can be enabled by Petaflop computers. This research is supported by the NASA HPCC/ESS program.
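
    For scale, the sketch below is the naive O(N^2) direct summation that tree algorithms reduce to the ~10^3 pairwise forces per particle quoted above; the softening length and unit system are illustrative assumptions, and this is not the UW group's code.

    ```python
    import numpy as np

    # Baseline O(N^2) gravitational acceleration by direct summation; a softening
    # length eps regularizes close encounters. Illustrative sketch only.

    def direct_sum_accel(pos, mass, G=1.0, eps=1e-2):
        """Acceleration (N, 3) on each of N particles with positions pos (N, 3)."""
        dx = pos[None, :, :] - pos[:, None, :]      # dx[i, j] = pos[j] - pos[i]
        r2 = (dx**2).sum(axis=-1) + eps**2          # softened squared separations
        np.fill_diagonal(r2, np.inf)                # exclude self-interaction
        inv_r3 = r2**-1.5
        return G * (mass[None, :, None] * dx * inv_r3[:, :, None]).sum(axis=1)

    pos = np.random.randn(256, 3)
    acc = direct_sum_accel(pos, mass=np.ones(256))
    ```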

  8. Application of particle and lattice codes to simulation of hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Damjanac, Branko; Detournay, Christine; Cundall, Peter A.

    2016-04-01

    With the development of unconventional oil and gas reservoirs over the last 15 years, the understanding and capability to model the propagation of hydraulic fractures in inhomogeneous and naturally fractured reservoirs has become very important for the petroleum industry (but also for some other industries like mining and geothermal). Particle-based models provide advantages over other models and solutions for the simulation of fracturing of rock masses that cannot be assumed to be continuous and homogeneous. It has been demonstrated (Potyondy and Cundall Int J Rock Mech Min Sci Geomech Abstr 41:1329-1364, 2004) that particle models based on a simple force criterion for fracture propagation match theoretical solutions and scale effects derived using the principles of linear elastic fracture mechanics (LEFM). The challenge is how to apply these models effectively (i.e., with acceptable models sizes and computer run times) to the coupled hydro-mechanical problems of relevant time and length scales for practical field applications (i.e., reservoir scale and hours of injection time). A formulation of a fully coupled hydro-mechanical particle-based model and its application to the simulation of hydraulic treatment of unconventional reservoirs are presented. Model validation by comparing with available analytical asymptotic solutions (penny-shape crack) and some examples of field application (e.g., interaction with DFN) are also included.

  9. About improving efficiency of the P3M algorithms when computing the inter-particle forces in beam dynamics

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2017-03-01

    In the paper, a problem of improving efficiency of the particle-particle-particle-mesh (P3M) algorithm in computing the inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by the direct method of summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" is introduced to describe the spatial field distribution of a single point charge, so the "pattern" contains pre-calculated potential values. This approach reduces the set of arithmetic operations performed in the innermost of the nested loops to addition and assignment operators and therefore decreases the running time substantially. A simulation model developed in C++ substantiates this view, showing accuracy acceptable for particle beam calculations together with substantially improved speed.
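
    The following is a rough 2-D sketch of the pre-calculated "pattern" idea (not the authors' C++ model): the potential footprint of a single point charge is tabulated once, and each particle then deposits a shifted, charge-scaled copy onto the mesh, leaving only addition and assignment in the innermost loop. Mesh size, units and the particle list are illustrative assumptions.

    ```python
    import numpy as np

    # Pre-calculated point-charge potential "pattern" on a 2-D mesh (arbitrary units).
    N = 64
    half = N // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    r = np.hypot(x, y)
    r[half, half] = 0.5                  # regularize the self-cell
    pattern = 1.0 / r                    # potential of a unit point charge

    # Accumulate shifted patterns over particles: additions and assignments only.
    phi = np.zeros((2 * N, 2 * N))       # padded potential mesh
    particles = [(40, 52, 1.0), (90, 71, -1.0)]   # (ix, iy, charge), illustrative
    for ix, iy, q in particles:
        phi[ix - half:ix + half, iy - half:iy + half] += q * pattern
    ```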

  10. GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts

    NASA Astrophysics Data System (ADS)

    Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.

    2007-12-01

    The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle is having sufficient computer hardware to handle the disparate temporal and spatial scales. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude increase in computing speed over CPU-based systems, for little more than the cost of a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented and used to provide new insight into radiation belt dynamics.

  11. Low-energy ion acceleration at quasi-perpendicular shocks: Transverse diffusion

    NASA Technical Reports Server (NTRS)

    Giacalone, J.; Jokipii, J. R.

    1995-01-01

    The problem of ion injection and acceleration at quasi-perpendicular shocks has been the subject of some debate over the past two decades. It is widely known that these shocks efficiently accelerate particles that are well into the high-energy tail of the distribution. However, the issue of injection, or the acceleration of low-energy ions, has yet to reach a consensus. The fundamental issue is whether there is enough diffusion normal to the magnetic field for the particles to remain near the shock. Since transverse diffusion is a physical process that is not well understood in space plasmas, this is an important and difficult issue to address. In this report, we investigate the ion injection problem by performing test-particle orbit integrations using synthesized turbulent fields. These fields are fully three-dimensional so that transverse diffusion is possible (cross-field diffusion is not possible in geometries where the electromagnetic fields have fewer than three dimensions). The synthesized fields are produced by superimposing a three-dimensional wave field on a background field. For completeness, we compare the results from this model with more established theories, such as the diffusive approximation and scatter-free shock drift acceleration. We also compare these results with other numerical simulation techniques, such as the well-known hybrid simulation, and with other test-particle calculations in which the shock fields are specified to have fewer than three dimensions. We also discuss some recent relevant observations and how these compare with our results.
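
    A minimal sketch of the kind of field synthesis described above, under stated assumptions (random-phase transverse modes with a power-law amplitude spectrum superposed on a uniform background field); because each mode's polarization is perpendicular to its wavevector, the summed fluctuation is divergence-free. This is an illustration, not the authors' turbulence model.

    ```python
    import numpy as np

    # Synthesized turbulent magnetic field: B(x) = B0 + sum of transverse waves
    # with random directions and phases. Spectrum and mode count are assumptions.

    rng = np.random.default_rng(1)
    n_modes = 64
    k = np.logspace(-1, 1, n_modes)                   # wavenumbers
    amp = k**(-5.0 / 6.0)                             # power-law amplitude per mode
    phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    khat = rng.normal(size=(n_modes, 3))
    khat /= np.linalg.norm(khat, axis=1)[:, None]     # random unit wavevectors
    pol = np.cross(khat, rng.normal(size=(n_modes, 3)))
    pol /= np.linalg.norm(pol, axis=1)[:, None]       # transverse polarizations

    def B_field(x, B0=np.array([0.0, 0.0, 1.0])):
        """Background plus synthesized wave field at position x (3,)."""
        phases = k * (khat @ x) + phase
        return B0 + (amp[:, None] * pol * np.cos(phases)[:, None]).sum(axis=0)

    print(B_field(np.array([0.1, 0.2, 0.3])))
    ```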

  12. Species Entropies in the Kinetic Range of Collisionless Plasma Turbulence: Particle-in-cell Simulations

    NASA Astrophysics Data System (ADS)

    Gary, S. Peter; Zhao, Yinjian; Hughes, R. Scott; Wang, Joseph; Parashar, Tulasi N.

    2018-06-01

    Three-dimensional particle-in-cell simulations of the forward cascade of decaying turbulence in the relatively short-wavelength kinetic range have been carried out as initial-value problems on collisionless, homogeneous, magnetized electron-ion plasma models. The simulations have addressed both whistler turbulence at β_i = β_e = 0.25 and kinetic Alfvén turbulence at β_i = β_e = 0.50, computing the species energy dissipation rates as well as the increase of the Boltzmann entropies for both ions and electrons as functions of the initial dimensionless fluctuating magnetic field energy density ε_o in the range 0 ≤ ε_o ≤ 0.50. This study shows that electron and ion entropies display similar rates of increase and that all four entropy rates increase approximately as ε_o, consistent with the assumption that the quasilinear premise is valid for the initial conditions assumed for these simulations. The simulations further predict that the time rates of ion entropy increase should be substantially greater for kinetic Alfvén turbulence than for whistler turbulence.
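
    As an illustration of the entropy diagnostic tracked above, a short sketch that estimates a Boltzmann-like entropy S = -Σ f ln f by histogramming particle velocities; the bin count is an illustrative assumption, and the estimate is defined only up to an additive constant set by the bin volume. It is not the authors' diagnostic code.

    ```python
    import numpy as np

    # Boltzmann-entropy estimate from particle velocities via a 3-D histogram.
    def boltzmann_entropy(velocities, bins=32):
        """Entropy estimate from an (N, 3) array of particle velocities."""
        f, _ = np.histogramdd(velocities, bins=bins)
        f = f / f.sum()                  # normalize to a discrete probability
        f = f[f > 0.0]                   # convention: 0 * log(0) = 0
        return -(f * np.log(f)).sum()

    v = np.random.normal(size=(100_000, 3))   # Maxwellian test sample
    print(boltzmann_entropy(v))
    ```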

  13. Distributed multi-sensor particle filter for bearings-only tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Jungen; Ji, Hongbing

    2012-02-01

    In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial distance observability of the target is poor, algorithms based on sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramér-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method provides better performance than the traditional PF method.
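
    For readers unfamiliar with the PF machinery underlying the article, below is a minimal single-sensor bootstrap particle filter for one bearings-only update cycle (propagate, weight by the bearing likelihood, resample); the near-constant-velocity model and noise levels are illustrative assumptions, not the authors' hierarchical multi-sensor scheme.

    ```python
    import numpy as np

    # Minimal bootstrap particle filter step for bearings-only tracking.
    rng = np.random.default_rng(0)
    n = 2000
    state = rng.normal([5.0, 5.0, 0.1, 0.1], 1.0, size=(n, 4))  # x, y, vx, vy
    sigma_b = 0.02                                              # bearing noise (rad)

    def pf_step(state, z, dt=1.0):
        """One predict-update-resample cycle for bearing measurement z (rad)."""
        state = state.copy()
        state[:, :2] += dt * state[:, 2:]                       # drift
        state[:, 2:] += rng.normal(0.0, 0.05, size=(n, 2))      # process noise
        bearing = np.arctan2(state[:, 1], state[:, 0])          # sensor at origin
        w = np.exp(-0.5 * ((z - bearing) / sigma_b) ** 2)       # likelihood weights
        w /= w.sum()
        return state[rng.choice(n, size=n, p=w)]                # multinomial resampling

    state = pf_step(state, z=np.arctan2(5.2, 5.1))
    print(state[:, :2].mean(axis=0))                            # crude position estimate
    ```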

  14. Aerosol simulation including chemical and nuclear reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marwil, E.S.; Lemmon, E.C.

    1985-01-01

    The numerical simulation of aerosol transport, including the effects of chemical and nuclear reactions, presents a challenging dynamic accounting problem. Particles of different sizes agglomerate and settle out due to various mechanisms, such as diffusion, diffusiophoresis, thermophoresis, gravitational settling, turbulent acceleration, and centrifugal acceleration. Particles also change size due to the condensation and evaporation of materials on the particle. Heterogeneous chemical reactions occur at the interface between a particle and the suspending medium, or a surface and the gas in the aerosol. Homogeneous chemical reactions occur within the aerosol suspending medium, within a particle, and on a surface. These reactions may include a phase change. Nuclear reactions occur in all locations. These spontaneous transmutations from one element form to another occur at greatly varying rates and may result in phase or chemical changes which complicate the accounting process. This paper presents an approach for inclusion of these effects on the transport of aerosols. The accounting system is very complex and results in a large set of stiff ordinary differential equations (ODEs). The techniques for numerical solution of these ODEs require special attention to achieve their solution in an efficient and affordable manner. 4 refs.
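
    A minimal sketch of why stiffness dictates the solver choice discussed above: two species coupled through widely separated rate constants, integrated with an implicit BDF method. SciPy stands in for the paper's solver, and the rate constants are illustrative assumptions, not its chemistry.

    ```python
    from scipy.integrate import solve_ivp

    # Stiff two-species exchange: rate constants separated by eight orders of
    # magnitude force an implicit method; explicit solvers would need tiny steps.
    def rates(t, y):
        fast, slow = 1.0e6, 1.0e-2
        return [-fast * y[0] + slow * y[1],
                 fast * y[0] - slow * y[1]]

    sol = solve_ivp(rates, (0.0, 100.0), [1.0, 0.0], method="BDF", rtol=1e-8)
    print(sol.y[:, -1])   # near-equilibrium partition of the two species
    ```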

  15. Improving the representation of mixed-phase cloud microphysics in the ICON-LEM

    NASA Astrophysics Data System (ADS)

    Tonttila, Juha; Hoose, Corinna; Milbrandt, Jason; Morrison, Hugh

    2017-04-01

    The representation of ice-phase cloud microphysics in ICON-LEM (the Large-Eddy Model configuration of the ICOsahedral Nonhydrostatic model) is improved by implementing the recently published Predicted Particle Properties (P3) scheme into the model. In the typical two-moment microphysical schemes, such as that previously used in ICON-LEM, ice-phase particles must be partitioned into several prescribed categories. It is inherently difficult to distinguish between categories such as graupel and hail based on just the particle size, yet this partitioning may significantly affect the simulation of convective clouds. The P3 scheme avoids the problems associated with predefined ice-phase categories that are inherent in traditional microphysics schemes by introducing the concept of "free" ice-phase categories, whereby the prognostic variables enable the prediction of a wide range of smoothly varying physical properties and hence particle types. To our knowledge, this is the first application of the P3 scheme in a large-eddy model with horizontal grid spacings on the order of 100 m. We will present results from ICON-LEM simulations with the new P3 scheme comprising idealized stratiform and convective cloud cases. We will also present real-case limited-area simulations focusing on the HOPE (HD(CP)2 Observational Prototype Experiment) intensive observation campaign. The results are compared with a matching set of simulations employing the two-moment scheme and the performance of the model is also evaluated against observations in the context of the HOPE simulations, comprising data from ground based remote sensing instruments.

  16. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang

    A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme in the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code with a first-order convergence rate was updated so that it resolves the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems. Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotating and rotating spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, at both low and moderate particle Reynolds numbers, to compare the simulated results with the literature and to develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulation results so as to be closely applicable to real processes over the entire range of packing fractions and both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers, and is essentially unchanged as the angle of the rotation axis varies.

  17. Mixed analytical-stochastic simulation method for the recovery of a Brownian gradient source from probability fluxes to small windows.

    PubMed

    Dobramysl, U; Holcman, D

    2018-02-15

    Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.

  18. Modeling and simulation of dust behaviors behind a moving vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jingfang

    Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test and simulate different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle simulation controlled by users from the GUI. I have also tested the factors which play against the physical behaviors and graphics appearances of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-looking dust behaviors. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time steps. Several algorithms are used to speed up the simulation. For example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts and processes. The performance study shows that both time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the extra calculation of converging the numerical integration of the fluid dynamics, which usually takes about 4-5 minutes to reach steady state.

  19. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    PubMed

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating the natural phenomenon to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.

  20. On the Performance of Linear Decreasing Inertia Weight Particle Swarm Optimization for Global Optimization

    PubMed Central

    Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

    2013-01-01

    Linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is very efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which had previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
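
    The LDIW strategy at the heart of the paper reduces to one line; the sketch below uses the common literature values w_max = 0.9 and w_min = 0.4, which are typical settings rather than the specific values tuned by the authors.

    ```python
    # Linear decreasing inertia weight: shifts the swarm from exploration (large w)
    # to exploitation (small w) over the course of the run.
    def ldiw(t, t_max, w_max=0.9, w_min=0.4):
        """Inertia weight at iteration t of a t_max-iteration run."""
        return w_max - (w_max - w_min) * t / t_max

    # It enters the standard PSO velocity update (names illustrative):
    # v = ldiw(t, t_max) * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    ```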

  1. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization

    PubMed Central

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Chen, Chun-Hung

    2017-01-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating the natural phenomenon to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort. PMID:29170617

  2. Local Cloudiness Development Forecast Based on Simulation of Solid Phase Formation Processes in the Atmosphere

    NASA Astrophysics Data System (ADS)

    Barodka, Siarhei; Kliutko, Yauhenia; Krasouski, Alexander; Papko, Iryna; Svetashev, Alexander; Turishev, Leonid

    2013-04-01

    Numerical simulation of thundercloud formation processes is currently of great practical interest. Thunderclouds significantly affect airplane flights, and mesoscale weather forecasting has much to contribute to aviation forecast procedures; an accurate forecast can help avoid weather-related aviation accidents. The present study focuses on modelling convective cloud development and thundercloud detection on the basis of mesoscale atmospheric process simulation, aiming to significantly improve aeronautical forecasts. In the analysis, primary weather radar information has been used and adapted for mesoscale forecast systems. Two types of domains have been selected for modelling: an internal one (with a radius of 8 km) and an external one (with a radius of 300 km). The internal domain has been applied directly to study local cloud development, and the external domain data have been treated as initial and final conditions for cloud cover formation. The domain height has been chosen according to civil aviation forecast data (i.e. not exceeding 14 km). Simulations of weather conditions and local cloud development have been made within the selected domains with the WRF modelling system. In several cases, thunderclouds are detected within the convective clouds. To identify this category of clouds, we employ a simulation technique for solid-phase formation processes in the atmosphere. Based on the modelling results, we construct vertical profiles indicating the amount of solid phase in the atmosphere, and we obtain profiles of the amounts of ice particles and large particles (hailstones). While simulating the solid-phase formation processes, we investigate vertical and horizontal air flows and attempt to separate the total amount of solid phase into categories of small ice particles, large ice particles and hailstones. We also seek to identify and differentiate the basic atmospheric parameters of the sublimation and coagulation processes, aiming to predict ice-particle precipitation. To analyse the modelling results we apply the VAPOR three-dimensional visualization package. For the chosen domains, a diurnal synoptic situation has been simulated, including rain, sleet, ice pellets, and hail. As a result, we have obtained a large body of data describing various atmospheric parameters: cloud cover, major wind components, basic levels of isobaric surfaces, and precipitation rate. Based on these data, we show both the distinction in precipitation formation at various heights and its differentiation by ice particle type. The relation between a particle's rise in the atmosphere and its size is analysed: at 8-10 km altitude, large ice particles resulting from coagulation dominate, while at 6-7 km altitude one finds snow and small ice particles formed by condensational growth. Mechanical trajectories of solid precipitation particles for various ice formation processes have also been calculated.

  3. Particle Swarm Optimization with Double Learning Patterns.

    PubMed

    Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian

    2016-01-01

    Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO usually suffers from premature convergence due to the quick loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore the search space, preserving swarm diversity, while those in the slave swarm learn from the global best particle to refine a promising solution. When the evolutionary states of the two swarms interact, an interaction mechanism is enabled; it can help the slave swarm jump out of local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants.

  4. Second order upwind Lagrangian particle method for Euler equations

    DOE PAGES

    Samulyak, Roman; Chen, Hsin -Chiang; Yu, Kwangmin

    2016-06-01

    A new second order upwind Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free-surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) a significant improvement in the approximation of differential operators, based on a polynomial fit via weighted least squares approximation with convergence of prescribed order, (b) an upwind second-order particle-based algorithm with limiter, providing accuracy and long-term stability, and (c) accurate resolution of states at free interfaces. Finally, numerical verification tests demonstrating the convergence order for fixed domain and free surface problems are presented.

  5. Second order upwind Lagrangian particle method for Euler equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samulyak, Roman; Chen, Hsin -Chiang; Yu, Kwangmin

    A new second order upwind Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free-surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) a significant improvement in the approximation of differential operators, based on a polynomial fit via weighted least squares approximation with convergence of prescribed order, (b) an upwind second-order particle-based algorithm with limiter, providing accuracy and long-term stability, and (c) accurate resolution of states at free interfaces. Finally, numerical verification tests demonstrating the convergence order for fixed domain and free surface problems are presented.

  6. Hybrid molecular-continuum simulations using smoothed dissipative particle dynamics

    PubMed Central

    Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott

    2015-01-01

    We present a new multiscale simulation methodology for coupling a region with atomistic detail simulated via molecular dynamics (MD) to a numerical solution of the fluctuating Navier-Stokes equations obtained from smoothed dissipative particle dynamics (SDPD). In this approach, chemical potential gradients emerge due to differences in resolution within the total system and are reduced by introducing a pairwise thermodynamic force inside the buffer region between the two domains where particles change from MD to SDPD types. When combined with a multi-resolution SDPD approach, such as the one proposed by Kulkarni et al. [J. Chem. Phys. 138, 234105 (2013)], this method makes it possible to systematically couple atomistic models to arbitrarily coarse continuum domains modeled as SDPD fluids with varying resolution. We test this technique by showing that it correctly reproduces thermodynamic properties across the entire simulation domain for a simple Lennard-Jones fluid. Furthermore, we demonstrate that this approach is also suitable for non-equilibrium problems by applying it to simulations of the start up of shear flow. The robustness of the method is illustrated with two different flow scenarios in which shear forces act in directions parallel and perpendicular to the interface separating the continuum and atomistic domains. In both cases, we obtain the correct transient velocity profile. We also perform a triple-scale shear flow simulation where we include two SDPD regions with different resolutions in addition to a MD domain, illustrating the feasibility of a three-scale coupling. PMID:25637963

  7. Tango

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Jeffrey

    Tango enables the accelerated numerical solution of the multiscale problem of self-consistent transport and turbulence. Fast turbulence produces fluxes of heat and particles that slowly change the mean profiles of temperature and density. The fluxes are computed by separate turbulence simulation codes; Tango solves for the self-consistent change in mean temperature or density given those fluxes.

  8. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. Simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
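
    A sketch of the kernel idea above, under stated assumptions (1-D domain, equal particle masses): each particle carries a fixed-width Gaussian rather than a Dirac mass, so a smooth concentration field can be reconstructed from far fewer particles. The width h is a free parameter here; the paper's result suggests keeping it below roughly 12% of the domain size.

    ```python
    import numpy as np

    # Concentration field from Gaussian-kernel particles in 1-D.
    def concentration(x, particles, mass, h):
        """c(x) = sum_p m_p * K_h(x - x_p) with a Gaussian kernel of width h."""
        dx = x[:, None] - particles[None, :]
        kernel = np.exp(-0.5 * (dx / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
        return mass * kernel.sum(axis=1)

    x = np.linspace(0.0, 1.0, 200)
    particles = np.random.uniform(0.3, 0.7, size=50)
    c = concentration(x, particles, mass=1.0 / 50, h=0.05)  # h = 5% of the domain
    ```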

  9. An axisymmetric PFEM formulation for bottle forming simulation

    NASA Astrophysics Data System (ADS)

    Ryzhakov, Pavel B.

    2017-01-01

    A numerical model for bottle forming simulation is proposed. It is based upon the Particle Finite Element Method (PFEM) and is developed for the simulation of bottles characterized by rotational symmetry. The PFEM strategy is adapted to suit the problem of interest. Axisymmetric version of the formulation is developed and a modified contact algorithm is applied. This results in a method characterized by excellent computational efficiency and volume conservation characteristics. The model is validated. An example modelling the final blow process is solved. Bottle wall thickness is estimated and the mass conservation of the method is analysed.

  10. Cosmological simulations of decaying dark matter: implications for small-scale structure of dark matter haloes

    NASA Astrophysics Data System (ADS)

    Wang, Mei-Yu; Peter, Annika H. G.; Strigari, Louis E.; Zentner, Andrew R.; Arant, Bryan; Garrison-Kimmel, Shea; Rocha, Miguel

    2014-11-01

    We present a set of N-body simulations of a class of models in which an unstable dark matter particle decays into a stable dark matter particle and a non-interacting light particle, with decay lifetime comparable to the Hubble time. We study the effects of the recoil kick velocity (V_k) received by the stable dark matter on the structures of dark matter haloes ranging from galaxy-cluster to Milky Way-mass scales. For Milky Way-mass haloes, we use high-resolution, zoom-in simulations to explore the effects of decays on Galactic substructure. In general, haloes with circular velocities comparable to the magnitude of the kick velocity are most strongly affected by decays. We show that models with lifetimes Γ^{-1} ∼ H_0^{-1} and recoil speeds V_k ∼ 20-40 km s^{-1} can significantly reduce both the abundance of Galactic subhaloes and their internal densities. We find that decaying dark matter models that do not violate current astrophysical constraints can significantly mitigate both the `missing satellites problem' and the more recent `too big to fail problem'. These decaying models predict significant time evolution of haloes, which implies that at high redshifts decaying models exhibit a sequence of structure formation similar to that of cold dark matter. Thus, decaying dark matter models are significantly less constrained by high-redshift phenomena than warm dark matter models. We conclude that models of decaying dark matter make predictions that are relevant for the interpretation of observations of small galaxies in the Local Group and can be tested by forthcoming large-scale surveys.

  11. The Challenge of Incorporating Charged Dust in the Physics of Flowing Plasma Interactions

    NASA Astrophysics Data System (ADS)

    Jia, Y.; Russell, C. T.; Ma, Y.; Lai, H.; Jian, L.; Toth, G.

    2013-12-01

    The presence of two oppositely charged species with very different mass ratios leads to interesting physical processes and difficult numerical simulations. The reconnection problem is a classic example of this principle, with a proton-electron mass ratio of 1836, but it is not the only example. Increasingly we are discovering situations in which heavy, electrically charged dust particles are major players in a plasma interaction. The mass of a 1 nm dust particle is about 2000 proton masses, and that of a 10 nm dust particle about 2 million proton masses. One example comes from planetary magnetospheres. Charged dust pervades Enceladus' southern plume. The saturnian magnetospheric plasma flows through this dusty plume, interacting with the charged dust and ionized plume gas. Multiple wakes are seen downstream. The flow is diverted in one direction; the field-aligned current systems are elsewhere. How can these two wake features be understood? Next we have an example from the solar wind. When asteroids collide in a disruptive collision, the solar wind strips the nano-scale charged dust from the debris, forming a dusty plasma cloud that may be over 10^6 km in extent and contain over 100 million kg of dust accelerated to the solar wind speed. How does this occur, especially as rapidly as it appears to happen? In this paper we illustrate a start on understanding these phenomena using multifluid MHD simulations, but these simulations are only part of the answer to this complex problem, which needs attention from a broader range of the community.

  12. Design and Simulation of a MEMS Structure for Electrophoretic and Dielectrophoretic Separation of Particles by Contactless Electrodes

    NASA Technical Reports Server (NTRS)

    Shaw, Harry C.

    2007-01-01

    Rapid identification of pathogenic bacterial species is an important factor in combating public health problems such as E. coli contamination. Food- and waterborne pathogens account for sickness in 76 million people annually (CDC). Diarrheagenic E. coli is a major source of gastrointestinal illness. Severe sepsis and septicemia within the hospital environment are also major problems, with 751,000 cases annually and a 30-50% mortality rate (Crit Care Med, July '01, Vol. 29, 1303-10). Patient risks run the continuum from fever to organ failure and death. Misdiagnosis or inappropriate treatment increases mortality. There exists a need for rapid screening of samples for identification of pathogenic species (certain E. coli strains are essential for health). Critical to the identification process is the ability to isolate analytes of interest rapidly. This poster presents novel devices for the separation of particles on the basis of their dielectric properties, mass and surface charge characteristics. Existing designs involve contact between electrode surfaces and the analyte medium, resulting in contamination of the electrode-bearing elements. Two different device designs using different bulk micromachining MEMS processes (PolyMUMPS and a PyrexBIGold electrode design) are presented. These designs cover a range of particle sizes from small molecules through eucaryotic cells. The application of separation of bacteria is discussed in detail. Simulation data for electrostatic and microfluidic characteristics are provided. Detailed design characteristics and physical features of the as-fabricated PolyMUMPS design are provided. Analysis of the simulation data relative to the expected performance of the devices is provided and the resulting conclusions discussed.

  13. Simulation of 2D Granular Hopper Flow

    NASA Astrophysics Data System (ADS)

    Li, Zhusong; Shattuck, Mark

    2012-02-01

    Jamming and intermittent granular flow are big problems in industry, and the vertical hopper is a canonical example of these difficulties. We simulate gravity-driven flow and jamming of 2D disks in a vertical hopper and compare with identical companion experiments presented in this session. We measure and compare the flow rate and probability of jamming as a function of particle properties and geometry. We evaluate the ability of the standard Hertz-Mindlin contact model to quantitatively predict the experimental flow.
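
    For reference, the sketch below evaluates the Hertzian normal force at the core of the Hertz-Mindlin contact model named above, F_n = (4/3) E* sqrt(R*) δ^{3/2} for overlap δ; the spherical-contact form and material constants are illustrative assumptions (the simulation itself uses 2D disks), and this is not the authors' code.

    ```python
    import numpy as np

    # Hertzian normal contact force between two elastic spheres of one material.
    def hertz_normal_force(delta, R1, R2, E, nu):
        """Normal force (N) for overlap delta (m); E in Pa, nu dimensionless."""
        E_star = E / (2.0 * (1.0 - nu**2))   # effective modulus, identical materials
        R_star = R1 * R2 / (R1 + R2)         # effective contact radius
        return (4.0 / 3.0) * E_star * np.sqrt(R_star) * delta**1.5

    print(hertz_normal_force(delta=1e-6, R1=5e-4, R2=5e-4, E=70e9, nu=0.3))
    ```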

  14. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.

  15. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    DOE PAGES

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; ...

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.

  16. Taylor dispersion of colloidal particles in narrow channels

    NASA Astrophysics Data System (ADS)

    Sané, Jimaan; Padding, Johan T.; Louis, Ard A.

    2015-09-01

    We use a mesoscopic particle-based simulation technique to study the classic convection-diffusion problem of Taylor dispersion for colloidal discs in confined flow. When the disc diameter becomes non-negligible compared to the diameter of the pipe, there are important corrections to the original Taylor picture. For example, the colloids can flow more rapidly than the underlying fluid, and their Taylor dispersion coefficient is decreased. For narrow pipes, there are also further hydrodynamic wall effects. The long-time tails in the velocity autocorrelation functions are altered by the Poiseuille flow.

  17. Coupled fluid and solid mechanics study for improved permeability estimation of fines' invaded porous materials

    NASA Astrophysics Data System (ADS)

    Mirabolghasemi, M.; Prodanovic, M.

    2012-12-01

    The problem of fine particle infiltration is seen in fields from subsurface transport, to drug delivery, to industrial slurry flows. Sediment filtration and pathogen retention are well-known subsurface engineering problems that have been extensively studied through different macroscopic, microscopic and experimental modeling techniques. Due to heterogeneity, standard constitutive relationships and models yield poor predictions for flow (e.g. permeability) and rock properties (e.g. elastic moduli) of the invaded (damaged) porous media. This severely reduces our ability to predict, for instance, retention, pressure build-up, newly formed flow pathways or the porous medium's mechanical behavior. We chose a coupled computational fluid dynamics (CFD) - discrete element modeling (DEM) approach to simulate particulate flow through porous media represented by sphere packings. In order to minimize the uncertainty involved in estimating the flow properties of porous media at the Darcy scale and to address the dynamic nature of the filtration process, this microscopic approach is adopted as a robust method that can incorporate particle interaction physics as well as the heterogeneity of the porous medium. The coupled simulation was done with open-source packages providing both CFD (OpenFOAM) and DEM (LIGGGHTS) components. We ran several sensitivity analyses over different parameters, such as particle/grain size ratio, fluid viscosity, flow rate and sphere packing porosity, in order to investigate their effects on the depth of invasion and the damaged porous medium permeability. The response of the system to the variation of different parameters is reflected in different clogging mechanisms; for instance, bridging is the dominant mechanism of pore-throat clogging when larger particles penetrate into the packing, whereas for fine particles much smaller than the porous medium grains (1/20 in diameter) this mechanism is not very effective, due to the frequent formation and destruction of particle bridges. Finally, depending on the material and fluids that penetrate into the porous medium, ionic forces might play a significant role in the filtration process. We thus also report on the influence of particle attachment (and detachment) on the type of clogging mechanism. Pore-scale simulations allow for visualization and understanding of fundamental processes, and, further, the velocity fields are integrated into a distinctly non-monotonic permeability-porosity/(depth of penetration) relationship.

  18. Close packing in curved space by simulated annealing

    NASA Astrophysics Data System (ADS)

    Wille, L. T.

    1987-12-01

    The problem of packing spheres of a maximum radius on the surface of a four-dimensional hypersphere is considered. It is shown how near-optimal solutions can be obtained by packing soft spheres, modelled as classical particles interacting under an inverse power potential, followed by a subsequent hardening of the interaction. In order to avoid trapping in high-lying local minima, the simulated annealing method is used to optimise the soft-sphere packing. Several improvements over other work (based on local optimisation of random initial configurations of hard spheres) have been found. The freezing behaviour of this system is discussed as a function of particle number, softness of the potential and cooling rate. Apart from their geometric interest, these results are useful in the study of topological frustration, metallic glasses and quasicrystals.
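
    A compact sketch of the procedure described above, under stated assumptions (unit 3-sphere, 24 particles, inverse-12 potential, exponential cooling): soft particles are perturbed one at a time and moves are accepted by the Metropolis rule at a slowly decreasing temperature. Parameters are illustrative, not those of the paper.

    ```python
    import numpy as np

    # Simulated annealing of soft spheres on the surface of a 4-D hypersphere.
    rng = np.random.default_rng(0)
    n, power = 24, 12
    x = rng.normal(size=(n, 4))
    x /= np.linalg.norm(x, axis=1, keepdims=True)      # project onto the 3-sphere

    def energy(x):
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        iu = np.triu_indices(n, 1)                     # distinct pairs only
        return (d[iu] ** -power).sum()                 # inverse power potential

    E, T = energy(x), 1.0
    for step in range(20000):
        i = rng.integers(n)
        trial = x.copy()
        trial[i] += rng.normal(scale=0.05, size=4)     # perturb one particle
        trial[i] /= np.linalg.norm(trial[i])           # keep it on the sphere
        dE = energy(trial) - E
        if dE < 0.0 or rng.random() < np.exp(-dE / T): # Metropolis acceptance
            x, E = trial, E + dE
        T *= 0.9997                                    # slow exponential cooling

    # Hardening step: the smallest pair distance bounds the maximal hard-sphere radius.
    ```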

  19. A Keplerian-based Hamiltonian splitting for gravitational N-body simulations

    NASA Astrophysics Data System (ADS)

    Gonçalves Ferrari, G.; Boekholt, T.; Portegies Zwart, S. F.

    2014-05-01

    We developed a Keplerian-based Hamiltonian splitting for solving the gravitational N-body problem. This splitting allows us to approximate the solution of a general N-body problem by a composition of multiple, independently evolved two-body problems. While the Hamiltonian splitting is exact, we show that the composition of independent two-body problems results in a non-symplectic non-time-symmetric first-order map. A time-symmetric second-order map is then constructed by composing this basic first-order map with its self-adjoint. The resulting method is precise for each individual two-body solution and produces quick and accurate results for near-Keplerian N-body systems, like planetary systems or a cluster of stars that orbit a supermassive black hole. The method is also suitable for integration of N-body systems with intrinsic hierarchies, like a star cluster with primordial binaries. The superposition of Kepler solutions for each pair of particles makes the method excellently suited for parallel computing; we achieve ≳64 per cent efficiency for only eight particles per core, but close to perfect scaling for 16 384 particles on a 128 core distributed-memory computer. We present several implementations in SAKURA, one of which is publicly available via the AMUSE framework.

  20. Symplectic multi-particle tracking on GPUs

    NASA Astrophysics Data System (ADS)

    Liu, Zhicong; Qiang, Ji

    2018-05-01

    A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model preserves phase space structure and reduces non-physical effects in long-term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time in comparison to the CPU implementation.

  1. Modelling the propagation of smoke from a tanker fire in a built-up area.

    PubMed

    Brzozowska, Lucyna

    2014-02-15

    The paper presents the application of a Lagrangian particle model to problems connected with safety in road transport. Numerical simulations were performed for a hypothetical case of smoke emission from a tanker fire in a built-up area. Propagation of smoke was analysed for three wind directions. A diagnostic model was used to determine the air velocity field, whereas the dispersion of pollutants was analysed by means of a Lagrangian particle model (Brzozowska, 2013). The Idrisi Andes geographic information system was used to provide data on landforms and on their aerodynamic roughness. The presented results of computations and their analysis exemplify a possible application of the Lagrangian particle model: evaluation of mean (averaged over time) concentrations of pollutants and their distribution in the considered area (especially important due to the protection of people, animals and plants) and simulation of the propagation of harmful compounds in time, as well as performing computations for cases of the potential effects of road incidents.

  2. Hybrid Vlasov simulations for alpha particles heating in the solar wind

    NASA Astrophysics Data System (ADS)

    Perrone, Denise; Valentini, Francesco; Veltri, Pierluigi

    2011-06-01

    Heating and acceleration of heavy ions in the solar wind and corona represent a long-standing theoretical problem in space physics and are distinct experimental signatures of kinetic processes occurring in collisionless plasmas. To address this problem, we propose the use of a low-noise hybrid-Vlasov code in a four-dimensional phase-space (1D in physical space and 3D in velocity space) configuration. We trigger a turbulent cascade by injecting energy at large wavelengths and analyze the role of kinetic effects in the development of the energy spectra. Following the evolution of both the proton and α distribution functions shows that both ion species depart significantly from Maxwellian equilibrium, with the appearance of beams of accelerated particles in the direction parallel to the background magnetic field.

  3. Interaction of a shock wave with an array of particles and effect of particles on the shock wave weakening

    NASA Astrophysics Data System (ADS)

    Bulat, P. V.; Ilyina, T. E.; Volkov, K. N.; Silnikov, M. V.; Chernyshov, M. V.

    2017-06-01

    Two-phase systems that involve gas-particle or gas-droplet flows are widely used in aerospace and power engineering. The problems of weakening and suppressing detonation by saturating a gas or liquid flow with an array of solid particles are considered. Tasks associated with the formation of particle arrays, dust lifting behind a travelling shock wave, and the ignition of particles in high-speed, high-temperature gas flows bear directly on the safety of space flight. The mathematical models of shock wave interaction with an array of solid particles are discussed, and numerical methods are briefly described. Numerical simulations of the interaction between sub- and supersonic flows and an array of particles initially at rest are performed. The calculations take into account the influence of the particles on the carrier gas flow. The results show that inert particles significantly weaken shock waves, up to their complete suppression, which can be used to enhance the explosion safety of spacecraft.

  4. Towards predictive simulations of soot formation: from surrogate to turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanquart, Guillaume

    The combustion of transportation fuels leads to the formation of several kinds of pollutants, among which are soot particles. These particles, also formed during coal combustion and in fires, are the source of several health problems and environmental issues. Unfortunately, our current understanding of the chemical and physical phenomena leading to the formation of soot particles remains incomplete, and as a result, the predictive capability of our numerical tools is lacking. The objective of the work was to reduce the gap in the present understanding and modeling of soot formation both in laminar and turbulent flames. The effort spanned several length scales from the molecular level to large scale turbulent transport.

  5. Preferential Concentration of Particles in Protoplanetary Nebula Turbulence

    NASA Technical Reports Server (NTRS)

    Hartlep, Thomas; Cuzzi, Jeffrey N.

    2015-01-01

    Preferential concentration in turbulence is a process that causes inertial particles to cluster in regions of high strain (in between high-vorticity regions), with specifics depending on their stopping time or Stokes number. This process is thought to be important in various problems including cloud droplet formation and aerosol transport in the atmosphere, sprays, and also the formation of asteroids and comets in protoplanetary nebulae. In protoplanetary nebulae, the initial accretion of primitive bodies from freely-floating particles remains a problematic subject. Traditional growth-by-sticking models encounter a formidable "meter-size barrier" [1] in turbulent nebulae. One scenario that can lead directly from independent nebula particulates to large objects, avoiding the problematic m-km size range, involves the formation of dense clumps of aerodynamically selected, typically mm-size particles in protoplanetary turbulence. There is evidence that at least the ordinary chondrite parent bodies were initially composed entirely of a homogeneous mix of such particles, generally known as "chondrules" [2]. Thus, turbulent preferential concentration acting directly on chondrule-size particles is worthy of deeper study. Here, we present the statistical determination of particle multiplier distributions from numerical simulations of particle-laden isotropic turbulence, and a cascade model for modeling turbulent concentration at length scales and Reynolds numbers not accessible by numerical simulations. We find that the multiplier distributions are scale dependent at the very largest scales but have scale-invariant properties under a particular variable normalization at smaller scales.
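
    The cascade idea can be sketched in a few lines (the multiplier distribution below is an assumed beta law, not the measured distributions of the paper): at each level every cell splits in two, and its particle concentration is multiplied by a random factor m and its complement 2 - m, conserving the mean while building up intermittency at small scales.

        import numpy as np

        rng = np.random.default_rng(10)
        C = np.array([1.0])                      # concentration, one cell at the top
        for level in range(12):                  # 2^12 cells at the finest scale
            m = rng.beta(4.0, 4.0, len(C)) * 2.0 # multipliers in (0, 2), mean 1 (assumed pdf)
            C = np.column_stack([C * m, C * (2.0 - m)]).ravel()
        print("mean:", C.mean(), "max/mean concentration factor:", C.max() / C.mean())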

  6. Pairwise-interaction extended point-particle model for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Akiki, G.; Moore, W. C.; Balachandar, S.

    2017-12-01

    In this work we consider the pairwise interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise location of neighbors the PIEP model goes beyond local particle volume fraction, and distinguishes the influence of upstream, downstream and laterally located neighbors. The two main ingredients of the PIEP model are (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow that is approximated as a pairwise superposition of perturbation fields induced by each of the neighboring particles, and (ii) the forces and torque on the particle are then calculated from the undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model to systematically include the effect of perturbation fields induced by the neighbors in evaluating the net torque. Also, we use DNS results from a uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and improvement over the standard point-particle approach. In the case of two sedimenting spheres in a quiescent ambient the PIEP model is shown to capture the drafting-kissing-tumbling process. In cases of 5 and 80 sedimenting spheres a good agreement is obtained between the PIEP simulation and the DNS. For all three simulations, the DEM-PIEP was able to recreate, to a good extent, the results from the DNS, while requiring only a negligible fraction of the numerical resources required by the fully-resolved DNS.
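
    A structural sketch of the PIEP idea follows; the perturbation field is a crude analytical stand-in (a downstream wake deficit with assumed decay), not the stored DNS-based PIEP maps, and plain Stokes drag replaces the Faxén force relation. The point is the data flow: the undisturbed velocity at each particle is the macroscale flow plus a pairwise superposition of neighbor-induced perturbations, and forces follow from that velocity.

        import numpy as np

        def macro_flow(x):
            return np.array([1.0, 0.0, 0.0])       # assumed uniform ambient flow in +x

        def perturbation_map(r_vec, a=0.05):
            # crude stand-in for a tabulated PIEP map: a wake-like velocity deficit,
            # strongest directly downstream of a neighbor, decaying with distance
            r = max(np.linalg.norm(r_vec), 2.0 * a)
            alignment = max(r_vec[0] / r, 0.0)      # only downstream positions feel the wake
            return np.array([-(a / r) ** 2 * alignment, 0.0, 0.0])

        def undisturbed_velocity(i, X):
            # macroscale flow plus pairwise superposition of neighbor perturbations
            u = macro_flow(X[i])
            for j in range(len(X)):
                if j != i:
                    u = u + perturbation_map(X[i] - X[j])
            return u

        def stokes_drag(u_rel, a=0.05, mu=1e-3):
            return 6.0 * np.pi * mu * a * u_rel     # simplified force from undisturbed flow

        X = np.array([[0.0, 0.0, 0.0],              # three stationary test spheres
                      [0.2, 0.0, 0.0],
                      [0.1, 0.15, 0.0]])
        for i in range(len(X)):
            print(i, stokes_drag(undisturbed_velocity(i, X)))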

  7. Biogeography-based particle swarm optimization with fuzzy elitism and its applications to constrained engineering problems

    NASA Astrophysics Data System (ADS)

    Guo, Weian; Li, Wuzhao; Zhang, Qun; Wang, Lei; Wu, Qidi; Ren, Hongliang

    2014-11-01

    In evolutionary algorithms, elites are crucial to maintain good features in solutions. However, too many elites can make the evolutionary process stagnate without enhancing performance. This article combines particle swarm optimization (PSO) and biogeography-based optimization (BBO) into a hybrid algorithm, termed biogeography-based particle swarm optimization (BPSO), which makes a large number of elites effective in searching for optima. In this algorithm, the whole population is split into several subgroups; BBO is employed to search within each subgroup, and PSO is used for the global search. Since not all of the population is used in PSO, this structure overcomes the premature convergence of the original PSO. Time-complexity analysis shows that the novel algorithm does not increase the time consumption. Fourteen numerical benchmarks and four engineering problems with constraints are used to test the BPSO. To better deal with constraints, a fuzzy strategy for the number of elites is investigated. The simulation results validate the feasibility and effectiveness of the proposed algorithm.
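
    A hedged sketch of this hybrid structure (the migration rule, subgroup handling and coefficients are simplifications, not the paper's exact scheme): within each subgroup a BBO-style migration lets poorer habitats copy solution features from better ones, while a global PSO step pulls the whole swarm toward the best-known position.

        import numpy as np

        rng = np.random.default_rng(2)

        def sphere(x):                                 # benchmark objective
            return np.sum(x * x, axis=-1)

        n, dim, n_groups = 24, 10, 4
        X = rng.uniform(-5, 5, (n, dim))
        V = np.zeros_like(X)
        pbest, pval = X.copy(), sphere(X)

        for it in range(200):
            # BBO-style migration inside each subgroup
            for g in np.array_split(rng.permutation(n), n_groups):
                order = g[np.argsort(sphere(X[g]))]            # best habitat first
                for rank, i in enumerate(order):
                    immigrate = rank / max(len(order) - 1, 1)  # worse -> more immigration
                    mask = rng.random(dim) < immigrate
                    donor = order[rng.integers(0, max(rank, 1))]  # copy from a better one
                    X[i, mask] = X[donor, mask]
            # global PSO step
            f = sphere(X)
            improved = f < pval
            pbest[improved], pval[improved] = X[improved], f[improved]
            gbest = pbest[np.argmin(pval)]
            V = 0.7 * V + 1.5 * rng.random((n, dim)) * (pbest - X) \
                        + 1.5 * rng.random((n, dim)) * (gbest - X)
            X = X + V

        print("best value:", pval.min())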

  8. Conversion of magnetic energy in the magnetic reconnection layer of a laboratory plasma

    DOE PAGES

    Yamada, Masaaki; Yoo, Jongsoo; Jara-Almonte, Jonathan; ...

    2014-09-10

    Magnetic reconnection, in which magnetic field lines break and reconnect to change their topology, occurs throughout the universe. The essential feature of reconnection is that it energizes plasma particles by converting magnetic energy. Despite the long history of reconnection research, how this energy conversion occurs remains a major unresolved problem in plasma physics. Here we report that the energy conversion in a laboratory reconnection layer occurs in a much larger region than previously considered. The mechanisms for energizing plasma particles in the reconnection layer are identified, and a quantitative inventory of the converted energy is presented for the first time in a well-defined reconnection layer; 50% of the magnetic energy is converted to particle energy, 2/3 of which is transferred to ions and 1/3 to electrons. Our results are compared with simulations and space measurements, a key step toward resolving one of the most important problems in plasma physics.

  9. Long-Ranged Oppositely Charged Interactions for Designing New Types of Colloidal Clusters

    NASA Astrophysics Data System (ADS)

    Demirörs, Ahmet Faik; Stiefelhagen, Johan C. P.; Vissers, Teun; Smallenburg, Frank; Dijkstra, Marjolein; Imhof, Arnout; van Blaaderen, Alfons

    2015-04-01

    Getting control over the valency of colloids is not trivial and has been a long-desired goal for the colloidal domain. Typically, tuning the preferred number of neighbors for colloidal particles requires directional bonding, as in the case of patchy particles, which is difficult to realize experimentally. Here, we demonstrate a general method for creating the colloidal analogs of molecules and other new regular colloidal clusters without using patchiness or complex bonding schemes (e.g., DNA coating) by using a combination of long-ranged attractive and repulsive interactions between oppositely charged particles that also enable regular clusters of particles not all in close contact. We show that, due to the interplay between their attractions and repulsions, oppositely charged particles dispersed in an intermediate dielectric constant (4 < ε < 10) provide a viable approach for the formation of binary colloidal clusters. Tuning the size ratio and interactions of the particles enables control of the type and shape of the resulting regular colloidal clusters. Finally, we present an example of clusters made up of negatively charged large and positively charged small satellite particles, for which the electrostatic properties and interactions can be changed with an electric field. It appears that for sufficiently strong fields the satellite particles can move over the surface of the host particles and polarize the clusters. For even stronger fields, the satellite particles can be completely pulled off, reversing the net charge on the cluster. With computer simulations, we investigate how charged particles distribute on an oppositely charged sphere to minimize their energy and compare the results with the solutions to the well-known Thomson problem. We also use the simulations to explore the dependence of such clusters on the Debye screening length κ⁻¹ and the ratio of charges on the particles, showing good agreement with experimental observations.

  10. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks

    PubMed Central

    Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-01-01

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained properties of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, make target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the particle weights. Thus, the particle effectiveness is enhanced to avoid the “particle degeneracy” problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252

  11. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    PubMed

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained properties of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, make target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the particle weights. Thus, the particle effectiveness is enhanced to avoid the "particle degeneracy" problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
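
    A generic bootstrap particle filter illustrates the weight update and the degeneracy countermeasure the abstract refers to (the paper's LSSVR observation model is replaced here by an assumed Gaussian measurement; all parameters are placeholders): weights are multiplied by the measurement likelihood, and systematic resampling is triggered whenever the effective sample size collapses.

        import numpy as np

        rng = np.random.default_rng(3)
        N = 2000
        x = rng.normal(0.0, 5.0, N)                # 1-D position hypotheses (particles)
        w = np.full(N, 1.0 / N)
        truth = 2.0
        for t in range(50):
            truth += 0.5 + rng.normal(0.0, 0.1)            # target motion
            x += 0.5 + rng.normal(0.0, 0.3, N)             # process-model propagation
            z = truth + rng.normal(0.0, 1.0)               # noisy measurement
            w *= np.exp(-0.5 * (z - x) ** 2)               # Gaussian likelihood, sigma = 1
            w /= w.sum()
            if 1.0 / np.sum(w ** 2) < N / 2:               # effective sample size check
                pos = (rng.random() + np.arange(N)) / N    # systematic resampling
                idx = np.searchsorted(np.cumsum(w), pos)
                x = x[np.minimum(idx, N - 1)]
                w[:] = 1.0 / N
        print("estimate:", np.sum(w * x), "truth:", truth)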

  12. Effective friction of granular flows made of non-spherical particles

    NASA Astrophysics Data System (ADS)

    Somfai, Ellák; Nagy, Dániel B.; Claudin, Philippe; Favier, Adeline; Kálmán, Dávid; Börzsönyi, Tamás

    2017-06-01

    Understanding the rheology of dense granular matter is a long-standing problem and is important from both the fundamental and the applied points of view. As the basic building blocks of granular materials are macroscopic particles, the nature of both the response to deformations and the dissipation is very different from that of molecular materials. In the absence of large gradients, the best approach formulates the constitutive equation as an effective friction: for sheared granular matter, the ratio of the off-diagonal and diagonal elements of the stress tensor depends only on dynamical parameters, in particular the inertial number. In this work we employ numerical simulations to extend this formalism to granular packings made of frictionless elongated particles. We measured how the shape of the particles affects the effective friction, volume fraction and first normal stress difference, and compared the results to the spherical-particle case. We had to introduce polydispersity in particle size in order to keep the systems of more elongated particles disordered.

  13. Hydrodynamic Simulations of Ejecta Production From Shocked Metallic Surfaces

    NASA Astrophysics Data System (ADS)

    Karkhanis, Varad Abhimanyu

    The phenomenon of mass ejection into vacuum from a shocked metallic free surface can have a deleterious effect on the implosion phase of the Inertial Confinement Fusion (ICF) process. Often, the ejecta take the form of a cloud of particles that are the result of microjetting sourced from imperfections on the metallic free surface. Significant progress has been achieved in the understanding of ejecta dynamics by treating the process as a limiting case of the baroclinically-driven Richtmyer-Meshkov Instability (RMI). This conceptual picture is complicated by several practical considerations, including the breakup of spikes due to surface tension and the yield strength of the metal. Thus, the problem involves a wide range of physical phenomena, occurring often under extreme conditions of material behavior. We describe an approach in which continuum simulations using ideal gases can be used to capture key aspects of ejecta growth associated with the RMI. The approach exploits the analogy between the Rankine-Hugoniot jump conditions for ideal gases and the linear relationship between the shock velocity and particle velocity governing shocked metals. Such simulations with γ-law fluids have been successful in accurately predicting the velocity and mass of ejecta for different shapes, in excellent agreement with experiments. We use the astrophysical FLASH code, developed at the University of Chicago, to model this problem. Based on insights from our simulations, we suggest a modified expression for ejecta velocities that is valid for large initial perturbation amplitudes. The expression for velocities is extended to ejecta originating from cavities of arbitrary shape. The simulations are also used to validate a recently proposed source model for ejecta that predicts the ejected mass per unit area for sinusoidal and non-standard shapes. Such simulations and theoretical models play an important role in the design of target experiment campaigns.

  14. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper finds optimal PID controller parameters using particle swarm optimization (PSO), the genetic algorithm (GA) and the simulated annealing (SA) algorithm. The algorithms were applied to simulated chemical-process and electrical systems for which the PID controller is tuned. Two different fitness functions, the Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the different algorithms has been done based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
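
    The tuning loop can be sketched as PSO minimizing an ITAE cost over (kp, ki, kd). The plant below is an assumed second-order system (y'' + 2y' + 5y = u), not the paper's coupled-tank or DC-motor benchmarks, and the swarm coefficients are likewise assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        def itae(gains, dt=0.001, T=2.0):
            kp, ki, kd = gains
            y = dy = integ = 0.0
            e_prev = 1.0                          # avoids a derivative kick at t = 0
            cost = 0.0
            for k in range(int(T / dt)):
                e = 1.0 - y                       # unit step setpoint
                integ += e * dt
                u = kp * e + ki * integ + kd * (e - e_prev) / dt
                e_prev = e
                dy += (u - 2.0 * dy - 5.0 * y) * dt   # assumed plant dynamics
                y += dy * dt
                if not np.isfinite(y) or abs(y) > 1e6:
                    return 1e9                    # unstable gains get a huge cost
                cost += (k * dt) * abs(e) * dt    # integral of t * |e|
            return cost

        n = 20
        X = rng.uniform(0.1, 20.0, (n, 3))        # (kp, ki, kd) per particle
        V = np.zeros_like(X)
        P = X.copy()
        Pc = np.array([itae(x) for x in X])
        for _ in range(30):
            g = P[np.argmin(Pc)]
            V = 0.6 * V + 1.6 * rng.random((n, 3)) * (P - X) \
                        + 1.6 * rng.random((n, 3)) * (g - X)
            X = np.clip(X + V, 0.0, 50.0)
            c = np.array([itae(x) for x in X])
            better = c < Pc
            P[better], Pc[better] = X[better], c[better]
        print("best (kp, ki, kd):", P[np.argmin(Pc)], "ITAE:", Pc.min())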

  15. Simulations of Bingham plastic flows with the multiple-relaxation-time lattice Boltzmann model

    NASA Astrophysics Data System (ADS)

    Chen, SongGui; Sun, QiCheng; Jin, Feng; Liu, JianGuo

    2014-03-01

    Fresh cement mortar is a workable paste that can be well approximated as a Bingham plastic and whose flow behavior is of major concern in engineering. In this paper, Papanastasiou's model for Bingham fluids is solved using the multiple-relaxation-time lattice Boltzmann model (MRT-LB). Analysis of the stress growth exponent m in Bingham fluid flow simulations shows that Papanastasiou's model provides a good approximation of realistic Bingham plastics for values of m > 10^8. For lower values of m, Papanastasiou's model describes fluids intermediate between Bingham and Newtonian fluids. The MRT-LB model is validated on two benchmark problems: 2D steady Poiseuille flows and lid-driven cavity flows. Comparing the numerical velocity distributions with the corresponding analytical solutions shows that the MRT-LB model is appropriate for studying Bingham fluids while also providing better numerical stability. We further apply the MRT-LB model to simulate flow through a sudden-expansion channel and the flow surrounding a round particle. Besides the rich flow structures obtained in this work, the dynamic fluid force on the round particle is calculated. Results show that both the Reynolds number Re and the Bingham number Bn affect the drag coefficient C_D, and a drag coefficient correlation that accounts for both Re and Bn is proposed. The relationship between Bn and the ratio of unyielded-zone thickness to particle diameter is also analyzed. Finally, Bingham fluid flowing around a set of randomly dispersed particles is simulated to obtain the apparent viscosity and velocity fields. These results aid the simulation of fresh concrete flowing in porous media.
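
    The Papanastasiou regularization itself is a one-line formula, eta(g) = mu + tau_y (1 - exp(-m g)) / g for shear rate g, which tends to the ideal Bingham plastic as m grows; the parameter values below are illustrative assumptions, not the paper's.

        import numpy as np

        def papanastasiou_viscosity(gamma_dot, mu=1.0, tau_y=10.0, m=1e8):
            # apparent viscosity of a regularized Bingham fluid; for small
            # gamma_dot the yield term saturates near tau_y * m instead of diverging
            gamma_dot = np.asarray(gamma_dot, dtype=float)
            return mu + tau_y * (1.0 - np.exp(-m * gamma_dot)) / np.maximum(gamma_dot, 1e-300)

        rates = np.logspace(-6, 2, 5)
        for m in (1e2, 1e5, 1e8):        # larger m -> closer to the ideal Bingham limit
            print(m, papanastasiou_viscosity(rates, m=m))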

  16. Boundary particle method for Laplace transformed time fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Fu, Zhuo-Jia; Chen, Wen; Yang, Hai-Tian

    2013-02-01

    This paper develops a novel boundary meshless approach, the Laplace-transformed boundary particle method (LTBPM), for the numerical modeling of time-fractional diffusion equations. It implements the Laplace transform technique to obtain the corresponding time-independent inhomogeneous equation in Laplace space and then employs a truly boundary-only meshless boundary particle method (BPM) to solve this Laplace-transformed problem. Unlike other boundary discretization methods, the BPM does not require any inner nodes, since the recursive composite multiple reciprocity technique (RC-MRM) is used to convert the inhomogeneous problem into a higher-order homogeneous problem. Finally, the Stehfest numerical inverse Laplace transform (NILT) is implemented to retrieve the numerical solutions of the time-fractional diffusion equations from the corresponding BPM solutions. In comparison with finite difference discretization, the LTBPM introduces the Laplace transform and the Stehfest NILT algorithm to deal with the time-fractional derivative term, which evades the costly convolution integral in approximating the time-fractional derivative and avoids the effect of the time step on numerical accuracy and stability. Consequently, it can effectively simulate long time-history fractional diffusion systems. Error analysis and numerical experiments demonstrate that the present LTBPM is highly accurate and computationally efficient for 2D and 3D time-fractional diffusion equations.
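
    The Stehfest NILT step is standard and compact enough to sketch. Below, F stands in for the Laplace-space solution (a known test transform here rather than a BPM solution); N must be even, and N around 12-16 is typical in double precision before the alternating coefficients amplify round-off.

        import math

        def stehfest_coeffs(N):
            # classical Gaver-Stehfest weights V_i, i = 1..N (N even)
            V = []
            for i in range(1, N + 1):
                s = 0.0
                for k in range((i + 1) // 2, min(i, N // 2) + 1):
                    s += (k ** (N // 2) * math.factorial(2 * k) /
                          (math.factorial(N // 2 - k) * math.factorial(k) *
                           math.factorial(k - 1) * math.factorial(i - k) *
                           math.factorial(2 * k - i)))
                V.append((-1) ** (N // 2 + i) * s)
            return V

        def stehfest_invert(F, t, N=14):
            # f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)
            V = stehfest_coeffs(N)
            ln2_t = math.log(2.0) / t
            return ln2_t * sum(V[i] * F((i + 1) * ln2_t) for i in range(N))

        # check against a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
        for t in (0.5, 1.0, 2.0):
            print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))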

  17. A density-adaptive SPH method with kernel gradient correction for modeling explosive welding

    NASA Astrophysics Data System (ADS)

    Liu, M. B.; Zhang, Z. L.; Feng, D. L.

    2017-09-01

    Explosive welding involves processes like the detonation of explosive, impact of metal structures and strong fluid-structure interaction, while the whole process of explosive welding has not been well modeled before. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density-adapting technique which can effectively treat large density ratios is also proposed. The developed SPH model is first validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture the typical physics in explosive welding, including the explosion wave, welding surface morphology, jet flow and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.

  18. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improved statistics, as more particle tracks can be simulated within a low response time.

  19. A new approach to simulating collisionless dark matter fluids

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom; Kaehler, Ralf

    2013-09-01

    Recently, we have shown how current cosmological N-body codes already follow the fine grained phase-space information of the dark matter fluid. Using a tetrahedral tessellation of the three-dimensional manifold that describes perfectly cold fluids in six-dimensional phase space, the phase-space distribution function can be followed throughout the simulation. This allows one to project the distribution function into configuration space to obtain highly accurate densities, velocities and velocity dispersions. Here, we exploit this technique to show first steps on how to devise an improved particle-mesh technique. At its heart, the new method thus relies on a piecewise linear approximation of the phase-space distribution function rather than the usual particle discretization. We use pseudo-particles that approximate the masses of the tetrahedral cells up to quadrupolar order as the locations for cloud-in-cell (CIC) deposit instead of the particle locations themselves as in standard CIC deposit. We demonstrate that this modification already gives much improved stability and more accurate dynamics of the collisionless dark matter fluid at high force and low mass resolution. We demonstrate the validity and advantages of this method with various test problems as well as hot/warm dark matter simulations which have been known to exhibit artificial fragmentation. This completely unphysical behaviour is much reduced in the new approach. The current limitations of our approach are discussed in detail and future improvements are outlined.
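
    The pseudo-particle scheme changes where deposits happen, not the deposit kernel itself, which remains standard cloud-in-cell. For reference, a minimal 1-D periodic CIC deposit (grid size and test data assumed) shares each mass linearly between the two nearest grid points and conserves total mass exactly:

        import numpy as np

        def cic_deposit(positions, masses, n_grid, box):
            rho = np.zeros(n_grid)
            x = positions / box * n_grid            # positions in grid units
            i = np.floor(x - 0.5).astype(int)       # left neighbour (cell-centred grid)
            frac = (x - 0.5) - i                    # linear weight to the right neighbour
            np.add.at(rho, i % n_grid, masses * (1.0 - frac))
            np.add.at(rho, (i + 1) % n_grid, masses * frac)
            return rho

        rng = np.random.default_rng(5)
        pos = rng.uniform(0.0, 1.0, 10000)
        rho = cic_deposit(pos, np.full(10000, 1.0 / 10000), 64, 1.0)
        print("total mass deposited:", rho.sum())   # equals 1 to round-off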

  20. APPLICATION OF FLOW SIMULATION FOR EVALUATION OF FILLING-ABILITY OF SELF-COMPACTING CONCRETE

    NASA Astrophysics Data System (ADS)

    Urano, Shinji; Nemoto, Hiroshi; Sakihara, Kohei

    In this paper, the MPS method was applied to the flow analysis of self-compacting concrete. The MPS method is a particle method well suited to simulating moving-boundary and free-surface problems as well as large-deformation problems. The constitutive equation of self-compacting concrete is assumed to be a Bingham model. In order to investigate the flow stoppage and flow speed of self-compacting concrete, numerical analysis examples of the slump flow and L-flow tests were performed. In addition, to verify the compactability of self-compacting concrete, numerical analysis examples of compaction at the CFT diaphragm were performed. As a result, it was found that the MPS method is suitable for simulating the compaction of self-compacting concrete, and a sound appraisal was obtained by setting the flow-limit shear strain rate π_c and the limitation point of segregation.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peratt, A.L.; Mostrom, M.A.

    With the availability of 80--125 MHz microprocessors, the methodology developed for the simulation of problems in pulsed power and plasma physics on modern day supercomputers is now amenable to application on a wide range of platforms including laptops and workstations. While execution speeds with these processors do not match those of large scale computing machines, resources such as computer-aided-design (CAD) and graphical analysis codes are available to automate simulation setup and process data. This paper reports on the adaptation of IVORY, a three-dimensional, fully-electromagnetic, particle-in-cell simulation code, to this platform independent CAD environment. The primary purpose of this talk is to demonstrate how rapidly a pulsed power/plasma problem can be scoped out by an experimenter on a dedicated workstation. Demonstrations include a magnetically insulated transmission line, power flow in a graded insulator stack, a relativistic klystron oscillator, and the dynamics of a coaxial thruster for space applications.

  2. Convergence studies in meshfree peridynamic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seleson, Pablo; Littlewood, David J.

    2016-04-15

    Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions when using the proposed methods.

  3. On the high fidelity simulation of chemical explosions and their interaction with solid particle clouds

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Kaushik

    The flow field behind chemical explosions in multiphase environments is investigated using a robust, state-of-the-art simulation strategy that accounts for the thermodynamics, gas dynamics and fluid mechanics of relevance to the problem. Focus is placed on the investigation of blast wave propagation, the growth of hydrodynamic instabilities behind explosive blasts, the mixing aspects behind explosions, the effects of afterburn and its quantification, and the role played by solid particles in these phenomena. In particular, the confluence and interplay of these different physical phenomena are explored from a fundamental perspective, and applied to the problem of chemical explosions. A solid-phase solver suited for the study of high-speed, two-phase flows has been developed and validated. This solver accounts for the inter-phase mass, momentum and energy transfer through empirical laws, and ensures two-way coupling between the two phases, viz. solid particles and gas. For dense flow fields, i.e., when the solid volume fraction becomes non-negligible (~60%), the finite volume method with a Godunov-type shock-capturing scheme requires modifications to account for volume fraction gradients during the computation of cell-interface gas fluxes. To this end, the simulation methodology is extended with the formulation of an Eulerian gas, Lagrangian solid approach, thereby ensuring that the two-phase simulation strategy so developed can be applied to both flow conditions, dilute and dense alike. Moreover, under dense loading conditions the solid particles inevitably collide, which is accounted for in the current research effort with the use of an empirical collision/contact model from the literature. Furthermore, the post-detonation flow field consists of gases under extreme temperature and pressure conditions, necessitating the use of real-gas equations of state in the multiphase model. This overall simulation strategy is then extended to the investigation of chemical explosions in multiphase environments, with emphasis on the study of hydrodynamic instability growth, mixing, afterburn effects ensuing from the process, particle ignition and combustion (if reactive), dispersion, and their interaction with the vortices in the mixing layer. The post-detonation behavior of heterogeneous explosives is addressed in three parts. In the first part, only one-dimensional effects are considered, with the goal of assessing the presently developed dense two-phase formulation. The total deliverable impulsive loading from heterogeneous explosive charges containing inert steel particles is estimated and compared for a suite of operating parameters, and it is demonstrated that heterogeneous explosive charges deliver a higher near-field impulse than homogeneous explosive charges containing the same mass of the high explosive. In the second part, three-dimensional effects such as hydrodynamic instabilities are accounted for, with the focus on characterizing the mixing layer ensuing from the detonation of heterogeneous explosive charges containing inert steel particles. It is shown that particles introduce significant amounts of hydrodynamic instabilities in the mixing layer, resulting in additional physical phenomena that play a prominent role in the flow features.
In particular, the fluctuation intensities, fireball size and growth rates are augmented for heterogeneous explosions vis-à-vis homogeneous explosions, thereby demonstrating that solid particles enhance the perturbation intensities in the flow. In the third part of the investigation of heterogeneous explosions, dense, aluminized explosions are considered, and the particles are observed to burn in two phases, with an initial quenching due to the rarefaction wave and a final quenching outside the fireball. Owing to their faster response time scales, smaller particles are observed to heat and accelerate more at early times, and also to cool and decelerate more at late times, than larger particles. Furthermore, the average particle velocities at late times are observed to be independent of the initial solid volume fraction in the explosive charge, as the particles eventually reach equilibrium with the local gas. These studies have provided crucial insights into the flow physics of dense, aluminized explosives. (Abstract shortened by UMI.)

  4. Fast particles in a steady-state compact FNS and compact ST reactor

    NASA Astrophysics Data System (ADS)

    Gryaznevich, M. P.; Nicolai, A.; Buxton, P.

    2014-10-01

    This paper presents results of studies of fast particles (ions and alpha particles) in a steady-state compact fusion neutron source (CFNS) and a compact spherical tokamak (ST) reactor, carried out with Monte-Carlo and Fokker-Planck codes. Full-orbit simulations of fast-particle physics indicate that a compact high-field ST can be optimized for energy production with a lower plasma current (as required for alpha containment) than predicted using simple analytic expressions or a guiding-centre approximation in a numerical code. Alpha-particle losses may result in significant heating and erosion of the first wall, so such losses have been calculated for an ST pilot plant, and the dependence of the total and peak wall loads on the plasma current has been studied. The problem of dilution has been investigated, and results for compact and large devices are compared.

  5. Electrophoresis demonstration on Apollo 16

    NASA Technical Reports Server (NTRS)

    Snyder, R. S.

    1972-01-01

    Free fluid electrophoresis, a process used to separate particulate species according to surface charge, size, or shape, was suggested as a promising technique to utilize the near-zero-gravity conditions of space. Fluid electrophoresis on Earth is disturbed by gravity-induced thermal convection and sedimentation. An apparatus was developed to demonstrate the principle and possible problems of electrophoresis on Apollo 14, and the separation boundary between red and blue dye was photographed in space. The basic operating elements of the Apollo 14 unit were used for a second flight demonstration on Apollo 16. Polystyrene latex particles of two different sizes were used to simulate the electrophoresis of large biological particles. The particle bands in space were extremely stable compared to ground operation because convection in the fluid was negligible. Electrophoresis of the polystyrene latex particle groups according to size was accomplished, although electro-osmosis in the flight apparatus prevented a clear separation of the two particle bands.

  6. Free Mesh Method: fundamental conception, algorithms and accuracy study

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve such problems as solid, fluid, electro-magnetic phenomena and so on. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, which becomes a major part of the cost of a simulation. It is natural that the concept of meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variation are reviewed focusing on their fundamental conception, algorithms and accuracy. PMID:21558752

  7. Quantum simulation of the integer factorization problem: Bell states in a Penning trap

    NASA Astrophysics Data System (ADS)

    Rosales, Jose Luis; Martin, Vicente

    2018-03-01

    The arithmetic problem of factoring an integer N can be translated into the physics of a quantum device, a result that supports Pólya's and Hilbert's conjecture to demonstrate Riemann's hypothesis. The energies of this system, being univocally related to the factors of N, are the eigenvalues of a bounded Hamiltonian. Here we solve the quantum conditions and show that the histogram of the discrete energies, provided by the spectrum of the system, should be interpreted in number theory as the relative probability for a prime to be a factor candidate of N. This is equivalent to a quantum sieve that is shown to require only O((ln √N)^3) energy measurements to solve the problem, recovering Shor's complexity result. Hence the outcome can be seen as a probability map that a pair of primes solve the given factorization problem. Furthermore, we show that a possible embodiment of this quantum simulator corresponds to two entangled particles in a Penning trap. The possibility to build the simulator experimentally is studied in detail. The results show that factoring numbers many orders of magnitude larger than those computed with experimentally available quantum computers is achievable using typical parameters in Penning traps.

  8. A Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor TLS Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim of this paper is to use three heuristic optimization methods, simulated annealing (SA), the genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and to compare their performances. The room is simplified into discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it, based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions, while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and its fewer parameters to tune.
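
    The FOD problem reduces to a penalized set-cover search, which any of the three heuristics can attack. A toy simulated-annealing version is sketched below (the visibility table is random, not derived from real scanning geometry, and the incidence-angle term is omitted): flip one viewpoint in or out per move, and penalize uncovered wall segments heavily.

        import numpy as np

        rng = np.random.default_rng(9)
        n_vp, n_seg = 30, 120
        vis = rng.random((n_vp, n_seg)) < 0.2     # vis[v, s]: segment s seen from viewpoint v

        def cost(sel):
            covered = vis[sel].any(axis=0)
            return sel.sum() + 1000 * (~covered).sum()   # few viewpoints, full coverage

        sel = rng.random(n_vp) < 0.5              # random initial selection
        c, T = cost(sel), 5.0
        for it in range(20000):
            cand = sel.copy()
            cand[rng.integers(n_vp)] ^= True      # flip one viewpoint in/out
            cc = cost(cand)
            if cc < c or rng.random() < np.exp((c - cc) / T):
                sel, c = cand, cc                 # Metropolis acceptance
            T *= 0.9997                           # geometric cooling schedule
        print("viewpoints used:", sel.sum(),
              "uncovered segments:", (~vis[sel].any(axis=0)).sum())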

  9. Engineering Fracking Fluids with Computer Simulation

    NASA Astrophysics Data System (ADS)

    Shaqfeh, Eric

    2015-11-01

    There are no comprehensive simulation-based tools for engineering the flows of viscoelastic fluid-particle suspensions in fully three-dimensional geometries. On the other hand, the need for such a tool in engineering applications is immense. Suspensions of rigid particles in viscoelastic fluids play key roles in many energy applications. For example, in oil drilling the "drilling mud" is a very viscous, viscoelastic fluid designed to shear-thin during drilling, but thicken at stoppage so that the "cuttings" can remain suspended. In a related application known as hydraulic fracturing, suspensions of solids called "proppant" are used to prop open the fracture by pumping them into the well. It is well known that particle flow and settling in a viscoelastic fluid can be quite different from that observed in Newtonian fluids. First, the "fluid-particle split" at bifurcation cracks is controlled by fluid rheology in a manner that is not understood. Second, in Newtonian fluids, the presence of an imposed shear flow in the direction perpendicular to gravity (which we term a cross or orthogonal shear flow) has no effect on the settling of a spherical particle in Stokes flow (i.e., at vanishingly small Reynolds number). By contrast, in a non-Newtonian liquid, the complex rheological properties induce a nonlinear coupling between sedimentation and shear flow. Recent experimental data have shown that both the shear thinning and the elasticity of the suspending polymeric solutions significantly affect the fluid-particle split at bifurcations, as well as the settling rate of the solids. In the present work, we use the Immersed Boundary Method to develop computer simulations of viscoelastic flow in suspensions of spheres to study these problems. These simulations allow us to understand the detailed physical mechanisms behind the remarkable behavior seen in practice, and suggest design rules for creating new fluid recipes.

  10. Continuum modelling of segregating tridisperse granular chute flow

    NASA Astrophysics Data System (ADS)

    Deng, Zhekai; Umbanhowar, Paul B.; Ottino, Julio M.; Lueptow, Richard M.

    2018-03-01

    Segregation and mixing of size multidisperse granular materials remain challenging problems in many industrial applications. In this paper, we apply a continuum-based model that captures the effects of segregation, diffusion and advection for size tridisperse granular flow in quasi-two-dimensional chute flow. The model uses the kinematics of the flow and other physical parameters such as the diffusion coefficient and the percolation length scale, quantities that can be determined directly from experiment, simulation or theory and that are not arbitrarily adjustable. The predictions from the model are consistent with experimentally validated discrete element method (DEM) simulations over a wide range of flow conditions and particle sizes. The degree of segregation depends on the Péclet number, Pe, defined as the ratio of the segregation rate to the diffusion rate, the relative segregation strength κij between particle species i and j, and a characteristic length L, which is determined by the strength of segregation between smallest and largest particles. A parametric study of particle size, κij, Pe and L demonstrates how particle segregation patterns depend on the interplay of advection, segregation and diffusion. Finally, the segregation pattern is also affected by the velocity profile and the degree of basal slip at the chute surface. The model is applicable to different flow geometries, and should be easily adapted to segregation driven by other particle properties such as density and shape.
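
    A 1-D sketch of the advection-segregation-diffusion balance underlying such continuum models follows (bidisperse rather than tridisperse for brevity, with all coefficients assumed): small particles percolate downward at a rate proportional to the local concentration of the other species, diffusion smears the interface, and the Péclet number S·H/D sets the sharpness of the steady profile.

        import numpy as np

        nz, S, D, dt = 100, 1.0, 0.05, 1e-4   # cells, segregation rate, diffusivity, step
        dz = 1.0 / nz
        c = np.full(nz, 0.5)                  # small-particle concentration, mixed start
        for _ in range(20000):
            w = -S * (1.0 - c)                # downward percolation velocity of smalls
            flux = w * c                      # advective segregation flux on cells
            flux_e = np.zeros(nz + 1)         # fluxes on cell edges; zero at the walls
            flux_e[1:-1] = 0.5 * (flux[:-1] + flux[1:])
            dcdz = np.zeros(nz + 1)
            dcdz[1:-1] = (c[1:] - c[:-1]) / dz
            flux_e[1:-1] -= D * dcdz[1:-1]    # add diffusive flux
            c -= dt * (flux_e[1:] - flux_e[:-1]) / dz   # conservative update
        print("bottom/top concentration of smalls:", c[0], c[-1])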

  11. Data mining techniques for scientific computing: Application to asymptotic paraxial approximations to model ultrarelativistic particles

    NASA Astrophysics Data System (ADS)

    Assous, Franck; Chaskalovic, Joël

    2011-06-01

    We propose a new approach that consists in using data mining techniques for scientific computing. Indeed, data mining has proved to be efficient in other contexts that deal with huge data sets, as in biology, medicine, marketing, advertising and communications. Our aim here is to address the important problem of exploiting the results produced by any numerical method. Indeed, more and more data are created today by numerical simulations; thus, it seems necessary to look for efficient tools to analyze them. In this work, we focus our presentation on a test case dedicated to an asymptotic paraxial approximation for modeling ultrarelativistic particles. Our method deals directly with the numerical results of simulations and tries to understand what each order of the asymptotic expansion brings to the simulation results over what could be obtained by other lower-order or less accurate means. This new heuristic approach offers new potential applications for treating numerical solutions of mathematical models.

  12. SPH simulation of free surface flow over a sharp-crested weir

    NASA Astrophysics Data System (ADS)

    Ferrari, Angela

    2010-03-01

    In this paper the numerical simulation of a free-surface flow over a sharp-crested weir is presented. Since in this case the usual shallow water assumptions are not satisfied, we propose to solve the problem using the full weakly compressible Navier-Stokes equations with the Tait equation of state for water. The numerical method used is the new meshless Smoothed Particle Hydrodynamics (SPH) formulation proposed by Ferrari et al. (2009) [8], which accurately tracks the free-surface profile and provides monotone pressure fields. Thus, the unsteady evolution of the complex moving material interface (free surface) can be properly resolved. The simulations, involving about half a million fluid particles, have been run in parallel on two of the most powerful High Performance Computing (HPC) facilities in Europe. The validation of the results has been carried out by analysing the pressure field and comparing the free-surface profiles obtained with the SPH scheme against experimental measurements available in the literature [18]. A very good quantitative agreement has been obtained.

  13. MMAPDNG: A new, fast code backed by a memory-mapped database for simulating delayed γ-ray emission with MCNPX package

    NASA Astrophysics Data System (ADS)

    Lou, Tak Pui; Ludewigt, Bernhard

    2015-09-01

    The simulation of the emission of beta-delayed gamma rays following nuclear fission and the calculation of time-dependent energy spectra is a computational challenge. The widely used radiation transport code MCNPX includes a delayed gamma-ray routine that is inefficient and not suitable for simulating complex problems. This paper describes the code "MMAPDNG" (Memory-Mapped Delayed Neutron and Gamma), an optimized delayed gamma module written in C, discusses usage and merits of the code, and presents results. The approach is based on storing the required Fission Product Yield (FPY) data, decay data, and delayed particle data in a memory-mapped file. When compared to the original delayed gamma-ray code in MCNPX, memory utilization is reduced by two orders of magnitude and the delayed γ-ray sampling is sped up by three orders of magnitude. Other delayed particles such as neutrons and electrons can be implemented in future versions of the MMAPDNG code using its existing framework.
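
    The memory-mapping idea can be illustrated in a few lines (the file layout below is invented for illustration, not MMAPDNG's actual C data format): a large table of cumulative emission probabilities is written once, then mapped read-only, so many processes can sample energies while the operating system shares a single copy of the pages.

        import numpy as np

        # build per-nuclide CDFs over energy bins (synthetic data, assumed layout)
        table = np.random.default_rng(6).random((1000, 4096)).cumsum(axis=1)
        table /= table[:, -1:]
        np.save("dng_cdf.npy", table)

        cdf = np.load("dng_cdf.npy", mmap_mode="r")   # memory-mapped, demand-paged
        def sample_energy_bin(nuclide, rng):
            # invert the CDF with a binary search; only the touched pages are read
            return np.searchsorted(cdf[nuclide], rng.random())

        rng = np.random.default_rng(7)
        print([sample_energy_bin(42, rng) for _ in range(5)])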

  14. SPH Simulations of Spherical Bondi Accretion: First Step of Implementing AGN Feedback in Galaxy Formation

    NASA Astrophysics Data System (ADS)

    Barai, Paramita; Proga, D.; Nagamine, K.

    2011-01-01

    Our motivation is to numerically test the assumption, made in many previous galaxy-formation studies that include AGN feedback, that the central massive black hole (BH) of a galaxy accretes mass at the Bondi-Hoyle accretion rate with an ad hoc choice of parameters. We perform simulations of a spherical distribution of gas, within the radius range 0.1-200 pc, accreting onto a central supermassive black hole (the Bondi problem), using the 3D Smoothed Particle Hydrodynamics code Gadget. In our simulations we study the radial distribution of various gas properties (density, velocity, temperature, Mach number). We compute the central mass inflow rate at the inner boundary (0.1 pc), and investigate how different gas properties (initial density and velocity profiles) and computational parameters (simulation outer boundary, particle number) affect the central inflow. Radiative processes, namely heating by a central X-ray corona and gas cooling, have been included in our simulations. We study the thermal history of the accreting gas, and identify the contributions of the radiative and adiabatic terms in shaping the gas properties. We find that the current implementation of artificial viscosity in the Gadget code causes unwanted extra heating near the inner radius.

  15. Implementation of a hybrid particle code with a PIC description in r–z and a gridless description in ϕ into OSIRIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, A., E-mail: davidsoa@physics.ucla.edu; Tableman, A., E-mail: Tableman@physics.ucla.edu; An, W., E-mail: anweiming@ucla.edu

    2015-01-15

    For many plasma physics problems, three-dimensional and kinetic effects are very important. However, such simulations are very computationally intensive. Fortunately, there is a class of problems for which there is nearly azimuthal symmetry and the dominant three-dimensional physics is captured by the inclusion of only a few azimuthal harmonics. Recently, it was proposed [1] to model one such problem, laser wakefield acceleration, by expanding the fields and currents in azimuthal harmonics and truncating the expansion. The complex amplitudes of the fundamental and first harmonic for the fields were solved on an r–z grid, and a procedure for calculating the complex current amplitudes for each particle based on its motion in Cartesian geometry was presented using a Marder correction to maintain the validity of Gauss's law. In this paper, we describe an implementation of this algorithm into OSIRIS using a rigorous charge-conserving current deposition method to maintain the validity of Gauss's law. We show that this algorithm is a hybrid method which uses a particle-in-cell description in r–z and a gridless description in ϕ. We include the ability to keep an arbitrary number of harmonics and higher-order particle shapes. Examples for laser wakefield acceleration, plasma wakefield acceleration, and beam loading are also presented, and directions for future work are discussed.

  16. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches outperform it in terms of both solution quality and execution time.

  17. An implementation of a tree code on a SIMD, parallel computer

    NASA Technical Reports Server (NTRS)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.

  18. Incompressible SPH method for simulating Newtonian and non-Newtonian flows with a free surface

    NASA Astrophysics Data System (ADS)

    Shao, Songdong; Lo, Edmond Y. M.

    An incompressible smoothed particle hydrodynamics (SPH) method is presented to simulate Newtonian and non-Newtonian flows with free surfaces. The basic equations solved are the incompressible mass conservation and Navier-Stokes equations. The method uses prediction-correction fractional steps with the temporal velocity field integrated forward in time without enforcing incompressibility in the prediction step. The resulting deviation of particle density is then implicitly projected onto a divergence-free space to satisfy incompressibility through a pressure Poisson equation derived from an approximate pressure projection. Various SPH formulations are employed in the discretization of the relevant gradient, divergence and Laplacian terms. Free surfaces are identified by the particles whose density is below a set point. Wall boundaries are represented by particles whose positions are fixed. The SPH formulation is also extended to non-Newtonian flows and demonstrated using the Cross rheological model. The incompressible SPH method is tested by typical 2-D dam-break problems in which both water and fluid mud are considered. The computations are in good agreement with available experimental data. The different flow features between Newtonian and non-Newtonian flows after the dam-break are discussed.
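
    Not the paper's full projection solver, but a small Python sketch of a few of its SPH ingredients, with illustrative parameter values: the 2-D cubic-spline kernel, the density summation built from it, and the free-surface test that flags particles whose summed density drops below a set fraction of the reference density.

      import numpy as np

      def w_cubic(r, h):
          # standard 2-D cubic-spline kernel with support radius 2h
          q = r / h
          sigma = 10.0 / (7.0 * np.pi * h**2)
          out = np.zeros_like(q)
          m1 = q < 1.0
          m2 = (q >= 1.0) & (q < 2.0)
          out[m1] = 1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3
          out[m2] = 0.25 * (2.0 - q[m2])**3
          return sigma * out

      def density(x, m, h):
          # rho_i = sum_j m_j W(|x_i - x_j|, h): the SPH density summation
          r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
          return (m[None, :] * w_cubic(r, h)).sum(axis=1)

      def free_surface(rho, rho0, beta=0.99):
          # surface particles have truncated kernel support, hence low density
          return rho < beta * rho0

      # x = np.random.rand(500, 2); m = np.full(500, 1.0 / 500)
      # rho = density(x, m, h=0.06); surf = free_surface(rho, rho0=rho.max())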

  19. A multiscale SPH particle model of the near-wall dynamics of leukocytes in flow.

    PubMed

    Gholami, Babak; Comerford, Andrew; Ellero, Marco

    2014-01-01

    A novel multiscale Lagrangian particle solver based on SPH is developed with the intended application of leukocyte transport in large arteries. In such arteries, the transport of leukocytes and red blood cells can be divided into two distinct regions: the bulk flow and the near-wall region. In the bulk flow, the transport can be modeled on a continuum basis as the transport of passive scalar concentrations. Whereas in the near-wall region, specific particle tracking of the leukocytes is required and lubrication forces need to be separately taken into account. Because of the large separation of spatio-temporal scales involved in the problem, simulations of red blood cells and leukocytes are handled separately. In order to take the exchange of leukocytes between the bulk fluid and the near-wall region into account, solutions are communicated through coupling of conserved quantities at the interface between these regions. Because the particle tracking is limited to those leukocytes lying in the near-wall region only, our approach brings considerable speedup to the simulation of leukocyte circulation in a test geometry of a backward-facing step, which encompasses many flow features observed in vivo. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods

    NASA Astrophysics Data System (ADS)

    Park, Brian T.; Petrosian, Vahe

    1996-03-01

    Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and the Itô stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference method.
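
    A compact Python sketch of the Chang-Cooper discretization recommended above, written for a generic 1-D flux-form equation du/dt = d/dx [A(x) u + D(x) du/dx]; the escape and injection extensions mentioned in the abstract are omitted, the boundaries are the simplest zero-flux choice, and the A, D, and step values in the usage comment are illustrative.

      import numpy as np

      def chang_cooper_matrix(x, A, D, dt):
          n, dx = len(x), x[1] - x[0]
          xf = 0.5 * (x[1:] + x[:-1])          # interior cell faces
          Af, Df = A(xf), D(xf)
          w = Af * dx / Df
          w = np.where(np.abs(w) < 1e-8, 1e-8, w)   # avoid 0/0 at A = 0
          # Chang-Cooper weight: -> 1/2 (central) as w -> 0, upwinded otherwise
          delta = 1.0 / w - 1.0 / np.expm1(w)
          M = np.zeros((n, n))
          for j in range(n - 1):               # flux through face j+1/2
              c_lo = Af[j] * delta[j] - Df[j] / dx          # multiplies u_j
              c_hi = Af[j] * (1.0 - delta[j]) + Df[j] / dx  # multiplies u_{j+1}
              M[j, j] += c_lo / dx;       M[j, j + 1] += c_hi / dx
              M[j + 1, j] -= c_lo / dx;   M[j + 1, j + 1] -= c_hi / dx
          # omitting the outermost faces enforces zero-flux boundaries
          return np.eye(n) - dt * M            # fully implicit system matrix

      # x = np.linspace(0.0, 10.0, 201)
      # P = chang_cooper_matrix(x, A=lambda s: s - 5.0,
      #                         D=lambda s: 0.5 + 0 * s, dt=0.05)
      # u = np.exp(-(x - 3.0)**2)
      # for _ in range(200):
      #     u = np.linalg.solve(P, u)          # relaxes toward exp(-(x - 5)^2)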

  1. Quantitative Description of Crystal Nucleation and Growth from in Situ Liquid Scanning Transmission Electron Microscopy.

    PubMed

    Ievlev, Anton V; Jesse, Stephen; Cochell, Thomas J; Unocic, Raymond R; Protopopescu, Vladimir A; Kalinin, Sergei V

    2015-12-22

    Recent advances in liquid cell (scanning) transmission electron microscopy ((S)TEM) have enabled in situ nanoscale investigations of controlled nanocrystal growth mechanisms. Here, we experimentally and quantitatively investigated the nucleation and growth mechanisms of Pt nanostructures from an aqueous solution of K2PtCl6. Averaged statistical, network, and local approaches have been used for the data analysis and the description of both collective particle dynamics and local growth features. In particular, interaction between neighboring particles has been revealed and attributed to reduction of the platinum concentration in the vicinity of the particle boundary. The local approach for solving the inverse problem showed that particle dynamics can be simulated by a stationary diffusional model. The obtained results are important for understanding nanocrystal formation and growth processes and for optimization of synthesis conditions.

  2. Particle Swarm Optimization with Double Learning Patterns

    PubMed Central

    Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian

    2016-01-01

    Particle Swarm Optimization (PSO) is an effective tool in solving optimization problems. However, PSO usually suffers from the premature convergence due to the quick losing of the swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristic of learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs the master swarm and the slave swarm with different learning patterns to achieve a trade-off between the convergence speed and the swarm diversity. The particles in the master swarm and the slave swarm are encouraged to explore search for keeping the swarm diversity and to learn from the global best particle for refining a promising solution, respectively. When the evolutionary states of two swarms interact, an interaction mechanism is enabled. This mechanism can help the slave swarm in jumping out of the local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants. PMID:26858747
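
    For orientation, a canonical global-best PSO loop in Python; PSO-DLP layers its master/slave swarms and interaction mechanism on top of updates of this general shape, which this sketch does not reproduce. The coefficients are common constriction-style defaults, not the paper's settings.

      import numpy as np

      def pso(f, dim, n=30, iters=200, w=0.729, c1=1.49445, c2=1.49445,
              lo=-5.0, hi=5.0, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (n, dim))            # positions
          v = np.zeros((n, dim))                       # velocities
          pbest = x.copy()
          pbest_f = np.apply_along_axis(f, 1, x)       # personal bests
          g = pbest[pbest_f.argmin()].copy()           # global best
          for _ in range(iters):
              r1, r2 = rng.random((n, dim)), rng.random((n, dim))
              # inertia + cognitive pull to pbest + social pull to gbest
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.apply_along_axis(f, 1, x)
              improved = fx < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], fx[improved]
              g = pbest[pbest_f.argmin()].copy()
          return g, pbest_f.min()

      # sphere function test:
      # best, val = pso(lambda z: float((z**2).sum()), dim=10)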

  3. Multi-Target State Extraction for the SMC-PHD Filter

    PubMed Central

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-01-01

    The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274

  4. Continuous-feed optical sorting of aerosol particles

    PubMed Central

    Curry, J. J.; Levine, Zachary H.

    2016-01-01

    We consider the problem of sorting, by size, spherical particles of order 100 nm radius. The scheme we analyze consists of a heterogeneous stream of spherical particles flowing at an oblique angle across an optical Gaussian mode standing wave. Sorting is achieved by the combined spatial and size dependencies of the optical force. Particles of all sizes enter the flow at a point, but exit at different locations depending on size. Exiting particles may be detected optically or separated for further processing. The scheme has the advantages of accommodating a high throughput, producing a continuous stream of continuously dispersed particles, and exhibiting excellent size resolution. We performed detailed Monte Carlo simulations of particle trajectories through the optical field under the influence of convective air flow. We also developed a method for deriving effective velocities and diffusion constants from the Fokker-Planck equation that can generate equivalent results much more quickly. With an optical wavelength of 1064 nm, polystyrene particles with radii in the neighborhood of 275 nm, for which the optical force vanishes, may be sorted with a resolution below 1 nm. PMID:27410570

  5. Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.

    2017-06-19

    The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(1/N). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.

  6. Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Rodney O.; Passalacqua, Alberto

    2016-02-01

    Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models have been proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low order statistics, and the required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter. The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into MFIX. The effect of uncertainty on the disperse-phase volume fraction, on the phase velocities and on the pressure drop inside the fluidized bed is examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, the particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experimental results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.
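
    The core of the non-intrusive quadrature-based sampling can be sketched in a few lines of Python, with a toy stand-in for the CFD model: for a Gaussian uncertain input, the model is evaluated only at Gauss-Hermite nodes and response moments follow from the quadrature weights. The EQMOM/ECQMOM reconstruction from those moments is a separate step not shown, and all names and values here are illustrative.

      import numpy as np

      def qbuq_moments(model, mu, sigma, n_nodes=5, n_moments=4):
          # probabilists' Gauss-Hermite rule (weight exp(-t^2/2))
          t, w = np.polynomial.hermite_e.hermegauss(n_nodes)
          w = w / w.sum()                       # normalise to probability weights
          samples = mu + sigma * t              # physical-space quadrature nodes
          resp = np.array([model(s) for s in samples])
          # raw moments of the response, k = 1 .. n_moments
          return [float((w * resp**k).sum()) for k in range(1, n_moments + 1)]

      # toy model response to an uncertain viscosity:
      # m1, m2, m3, m4 = qbuq_moments(lambda nu: 1.0 / nu, mu=1.0, sigma=0.1)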

  7. One-dimensional gravity in infinite point distributions.

    PubMed

    Gabrielli, A; Joyce, M; Sicard, F

    2009-10-01

    The dynamics of infinite asymptotically uniform distributions of purely self-gravitating particles in one spatial dimension provides a simple and interesting toy model for the analogous three dimensional problem treated in cosmology. In this paper we focus on a limitation of such models as they have been treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e., the definition explicitly requires the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by "Jeans swindle" for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling of the Jeans swindle in three dimensions, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show explicitly that this prescription gives a well defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N-body simulations. For identical particles the dynamics of the simplest toy model (without expansion) is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss and compare with previous results in the literature and present new results for the specific case of this simplest (static) model starting from "shuffled lattice" initial conditions. These show qualitative properties of the evolution (notably its "self-similarity") like those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe.
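
    A small Python illustration of the screening prescription, under our own simplifications (a finite patch standing in for the infinite lattice, unit coupling): the 1-D pair force has constant magnitude and is damped by exp(-r/L), and the unscreened dynamics is approached as L grows.

      import numpy as np

      def screened_force(x, L, g=1.0):
          # pair force on i from j: g * sign(x_j - x_i) * exp(-|x_j - x_i| / L)
          dx = x[None, :] - x[:, None]
          f = g * np.sign(dx) * np.exp(-np.abs(dx) / L)
          np.fill_diagonal(f, 0.0)             # no self-force
          return f.sum(axis=1)

      # finite patch standing in for the perturbed infinite lattice:
      # x = np.arange(1000) + 0.01 * np.random.randn(1000)
      # F = screened_force(x, L=100.0)         # large L approaches the bare force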

  8. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain, since in space object surveillance they appear only as non-resolved images on ground-based observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly non-linear character of the photometric-data inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the limited applicability of UKF-based inversion and on the particle degradation and dilution of PF-based inversion. The paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the particle degradation and depletion problem of PF-based attitude inversion as well as the unsuitability of UKF for strongly non-linear inversion. The inversion accuracy is clearly superior to that of UKF and PF; moreover, even with large initial attitude errors, the method can invert the attitude with few particles and high precision.

  9. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.

  10. Fouling reduction characteristics of a no-distributor-fluidized-bed heat exchanger for flue gas heat recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun, Y.D.; Lee, K.B.; Islam, S.Z.

    2008-07-01

    In conventional flue gas heat recovery systems, fouling by fly ash and the related problems such as corrosion and cleaning are known to be major drawbacks. To overcome these problems, a single-riser no-distributor-fluidized-bed heat exchanger is devised and studied. Fouling and cleaning tests are performed for a uniquely designed fluidized-bed-type heat exchanger to demonstrate the effect of particles on fouling reduction and heat transfer enhancement. The tested heat exchanger model (1 m high and 54 mm internal diameter) is a gas-to-water type and composed of a main vertical tube and four auxiliary tubes through which particles circulate and transfer heat. Through the present study, fouling on the heat transfer surface could successfully be simulated by controlling air-to-fuel ratios rather than introducing particles through an external feeder, which produced soft deposit layers 1 to 1.5 mm thick on the inside pipe wall. Flue gas temperature at the inlet of the heat exchanger was maintained at 450 °C at a gas volume rate of 0.738 to 0.768 CMM (0.0123 to 0.0128 m³/s). From analyses of the measured data, heat transfer performances of the heat exchanger before and after fouling and with and without particles were evaluated. Results showed that the soft deposits were easily removed by introducing glass bead particles, and heat transfer performance doubled with particle circulation. In addition, it was found that this type of heat exchanger has high potential to recover heat from waste gases of furnaces, boilers, and incinerators effectively and to reduce fouling-related problems.

  11. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement, with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
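
    A Python sketch of the sinc-kernel family itself (the equalization scheme that adapts the exponent per particle is not reproduced): W_n(q) proportional to sinc(πq/2)^n with compact support q ≤ 2, normalized numerically, where raising the single exponent n sharpens the kernel. The grid resolution used for normalization is an arbitrary choice of ours.

      import numpy as np

      def sinc_kernel(q, n, dim=3):
          # np.sinc(x) = sin(pi x)/(pi x), so np.sinc(q/2) = sinc(pi q / 2)
          q = np.asarray(q, dtype=float)
          w = np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)
          # normalise numerically so the kernel integrates to one in 'dim' D
          s = np.linspace(1e-6, 2.0, 2001)
          shell = {1: np.full_like(s, 2.0), 2: 2.0 * np.pi * s,
                   3: 4.0 * np.pi * s**2}[dim]
          integ = np.sinc(s / 2.0) ** n * shell
          norm = float(np.sum(0.5 * (integ[1:] + integ[:-1]) * np.diff(s)))
          return w / norm

      # q = np.linspace(0.0, 2.0, 50)
      # W3, W8 = sinc_kernel(q, n=3), sinc_kernel(q, n=8)  # larger n is sharper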

  12. GASOLINE: Smoothed Particle Hydrodynamics (SPH) code

    NASA Astrophysics Data System (ADS)

    N-Body Shop

    2017-10-01

    Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.

  13. The full two-body-problem: Simulation, analysis, and application to the dynamics, characteristics, and evolution of binary asteroid systems

    NASA Astrophysics Data System (ADS)

    Fahnestock, Eugene Gregory

    The Full Two-Body-Problem (F2BP) describes the dynamics of two unconstrained rigid bodies in close proximity, having arbitrary spatial distributions of mass, charge, or similar field quantity, and interacting through a mutual potential dependent on that distribution. While the F2BP has applications in areas ranging from molecular dynamics to satellite formation flying, this dissertation focuses on its application to natural bodies in space with nontrivial mass distributions interacting through a mutual gravitational potential, i.e. binary asteroids. The dissertation first describes the further development and implementation of methods for accurate and efficient F2BP propagation, based upon a flexible method for computing the mutual potential between bodies modeled as homogeneous polyhedra. Next, the application of these numerical tools to the study of the binary asteroid (66391) 1999 KW4 is summarized. This system typifies the largest class of NEO binaries, comprising nearly half of them, characterized by a roughly oblate-spheroid primary rotating rapidly and a roughly triaxial-ellipsoid secondary in on-average synchronous rotation. Thus KW4's dynamics generalize to any member of that class. Analytical formulae are developed which separately describe the effects of primary oblateness and secondary triaxial-ellipsoid shape on the frequencies of system motions revealed through the F2BP simulation. These formulae are useful for estimating inertia elements and highest-level internal mass distributions of bodies in any similar system, simply from standoff observation of these motion frequencies. Finally, precise dynamical simulation and analysis of the motion of test particles within the time-varying gravity field of the F2BP system is detailed. This Restricted Full-detail Three-Body-Problem encompasses exploration of three types of particle motion within a binary asteroid: (1) orbital motion such as that of a spacecraft flying within the system about the primary, secondary, or system barycenter at large distance; (2) motion of ejecta particles originating from the body surfaces with substantial initial surface-relative velocity; (3) motion of particles originating from the primary surface near the equator, with no initial surface-relative velocity, but when the primary spin rate is raised past the "disruption spin rate" at which material on the surface will be spun off.

  14. Combined state and parameter identification of nonlinear structural dynamical systems based on Rao-Blackwellization and Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Abhinav, S.; Manohar, C. S.

    2018-03-01

    The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers in situations involving large amounts of dynamic measurement data and/or low measurement noise, the study employs a modified measurement model combined with an importance-sampling-based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: first, in the state estimation problem within the particle filtering step, and second, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory-scale beam traversed by a loaded trolley, and (c) an earthquake shake-table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.

  15. Numerical simulation of runaway electrons: 3-D effects on synchrotron radiation and impurity-based runaway current dissipation

    NASA Astrophysics Data System (ADS)

    del-Castillo-Negrete, D.; Carbajal, L.; Spong, D.; Izzo, V.

    2018-05-01

    Numerical simulations of runaway electrons (REs) with a particular emphasis on orbit dependent effects in 3-D magnetic fields are presented. The simulations were performed using the recently developed Kinetic Orbit Runaway electron Code (KORC) that computes the full-orbit relativistic dynamics in prescribed electric and magnetic fields including radiation damping and collisions. The two main problems of interest are synchrotron radiation and impurity-based RE dissipation. Synchrotron radiation is studied in axisymmetric fields and in 3-D magnetic configurations exhibiting magnetic islands and stochasticity. For passing particles in axisymmetric fields, neglecting orbit effects might underestimate or overestimate the total radiation power depending on the direction of the radial shift of the drift orbits. For trapped particles, the spatial distribution of synchrotron radiation exhibits localized "hot" spots at the tips of the banana orbits. In general, the radiation power per particle for trapped particles is higher than the power emitted by passing particles. The spatial distribution of synchrotron radiation in stochastic magnetic fields, obtained using the MHD code NIMROD, is strongly influenced by the presence of magnetic islands. 3-D magnetic fields also introduce a toroidal dependence on the synchrotron spectra, and neglecting orbit effects underestimates the total radiation power. In the presence of magnetic islands, the radiation damping of trapped particles is larger than the radiation damping of passing particles. Results modeling synchrotron emission by RE in DIII-D quiescent plasmas are also presented. The computation uses EFIT reconstructed magnetic fields and RE energy distributions fitted to the experimental measurements. Qualitative agreement is observed between the numerical simulations and the experiments for simplified RE pitch angle distributions. However, it is noted that to achieve quantitative agreement, it is necessary to use pitch angle distributions that depart from simplified 2-D Fokker-Planck equilibria. Finally, using the guiding center orbit model (KORC-GC), a preliminary study of pellet mitigated discharges in DIII-D is presented. The dependence of RE energy decay and current dissipation on initial energy and ionization levels of neon impurities is studied. The computed decay rates are within the range of experimental observations.

  16. A prospectus on kinetic heliophysics

    NASA Astrophysics Data System (ADS)

    Howes, Gregory G.

    2017-05-01

    Under the low density and high temperature conditions typical of heliospheric plasmas, the macroscopic evolution of the heliosphere is strongly affected by the kinetic plasma physics governing fundamental microphysical mechanisms. Kinetic turbulence, collisionless magnetic reconnection, particle acceleration, and kinetic instabilities are four poorly understood, grand-challenge problems that lie at the new frontier of kinetic heliophysics. The increasing availability of high cadence and high phase-space resolution measurements of particle velocity distributions by current and upcoming spacecraft missions and of massively parallel nonlinear kinetic simulations of weakly collisional heliospheric plasmas provides the opportunity to transform our understanding of these kinetic mechanisms through the full utilization of the information contained in the particle velocity distributions. Several major considerations for future investigations of kinetic heliophysics are examined. Turbulent dissipation followed by particle heating is highlighted as an inherently two-step process in weakly collisional plasmas, distinct from the more familiar case in fluid theory. Concerted efforts must be made to tackle the big-data challenge of visualizing the high-dimensional (3D-3V) phase space of kinetic plasma theory through physics-based reductions. Furthermore, the development of innovative analysis methods that utilize full velocity-space measurements, such as the field-particle correlation technique, will enable us to gain deeper insight into these four grand-challenge problems of kinetic heliophysics. A systems approach to tackle the multi-scale problem of heliophysics through a rigorous connection between the kinetic physics at microscales and the self-consistent evolution of the heliosphere at macroscales will propel the field of kinetic heliophysics into the future.

  17. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries, so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
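
    The likelihood-ratio mechanics described here can be seen in a standard toy example in Python (ours, not from the dissertation): estimating P(X > 6) for a standard normal by sampling from a mean-shifted density and reweighting each sample.

      import numpy as np
      from math import erf, sqrt

      rng = np.random.default_rng(1)
      a, n = 6.0, 100_000
      y = rng.normal(a, 1.0, n)           # draws from the tilted density N(a, 1)
      lr = np.exp(-a * y + 0.5 * a * a)   # likelihood ratio N(0,1)/N(a,1) at y
      est = float(np.mean((y > a) * lr))  # unbiased estimate of P(X > a)
      exact = 0.5 * (1.0 - erf(a / sqrt(2.0)))
      print(est, exact)                   # both ~1e-9; naive MC would return 0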

  18. The giant impact produced a precipitated Moon

    NASA Astrophysics Data System (ADS)

    Cameron, A. G. W.

    1993-03-01

    The author's current simulations of Giant Impacts on the protoearth show the development of large hot rock vapor atmospheres. The Balbus-Hawley mechanism will pump mass and angular momentum outwards in the equatorial plane; upon cooling and expansion the rock vapor will condense refractory material beyond the Roche distance, where it is available for lunar formation. During the last seven years, the author together with several colleagues has carried out a series of numerical investigations of the Giant Impact theory for the origin of the Moon. These involved three-dimensional simulations of the impact and its aftermath using Smooth Particle Hydrodynamics (SPH), in which the matter in the system is divided into discrete particles whose motions and internal energies are determined as a result of the imposed initial conditions. Densities and pressures are determined from the combined overlaps of the particles, which have a bell-shaped density distribution characterized by a smoothing length. In the original series of runs all particle masses and smoothing lengths had the same values; the matter in the colliding bodies consisted of initial iron cores and rock (dunite) mantles. Each of 41 runs used 3,008 particles, took several weeks of continuous computation, and gave fairly good representations of the ultimate state of the post-collision body or bodies but at best crude and qualitative information about individual particles in orbit. During the last two years an improved SPH program was used in which the masses and smoothing lengths of the particles are variable, and the intent of the current series of computations is to investigate the behavior of the matter exterior to the main parts of the body or bodies subsequent to the collisions. These runs are taking times comparable to a year of continuous computation in each case; they use 10,000 particles with 5,000 particles in the target and 5,000 in the impactor, and the particles thus have variable masses and smoothing lengths (the latter are dynamically adjusted so that a particle typically overlaps a few tens of its neighbors). Since the matter in the impactor provides the majority of the mass left in orbit after the collision, and since the masses of the particles that originated in the impactor are smaller than those in the target, the mass resolution in the exterior parts of the problem is greatly improved and the exterior particles properly simulate atmospheres in hydrostatic equilibrium.

  19. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability distribution functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which significantly reduces the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
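
    A toy Python version of the quantity in question, under our own simplifications: for a 1-D SDE, the "runaway probability" of reaching an upper threshold before a lower one is a Feynman-Kac expectation, estimated here by brute-force forward Euler-Maruyama. The paper's backward BSDE solver computes this for all initial states at once, which is where its advertised speedup comes from; that machinery is not reproduced here.

      import numpy as np

      def runaway_prob(x0, b, s, lo=0.0, hi=1.0, dt=1e-3, n=10_000, seed=0):
          rng = np.random.default_rng(seed)
          x = np.full(n, float(x0))
          alive = np.ones(n, dtype=bool)          # trajectories still in (lo, hi)
          hits_hi = np.zeros(n, dtype=bool)       # trajectories that "ran away"
          while alive.any():
              dw = rng.normal(0.0, np.sqrt(dt), n)
              x[alive] += b(x[alive]) * dt + s(x[alive]) * dw[alive]
              hits_hi |= alive & (x >= hi)
              alive &= (x > lo) & (x < hi)
          return hits_hi.mean()

      # sanity check: for pure diffusion the upper-exit probability equals x0
      # print(runaway_prob(0.3, b=lambda x: 0 * x, s=lambda x: 0 * x + 0.2))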

  1. Use of Hilbert Curves in Parallelized CUDA code: Interaction of Interstellar Atoms with the Heliosphere

    NASA Astrophysics Data System (ADS)

    Destefano, Anthony; Heerikhuisen, Jacob

    2015-04-01

    Fully 3D particle simulations can be a computationally and memory expensive task, especially when high resolution grid cells are required. The problem becomes further complicated when parallelization is needed. In this work we focus on computational methods to solve these difficulties. Hilbert curves are used to map the 3D particle space to the 1D contiguous memory space. This method of organization allows for minimized cache misses on the GPU as well as a sorted structure that is equivalent to an octal tree data structure. This type of sorted structure is attractive for use in adaptive mesh implementations due to the logarithmic search time. Implementations using the Message Passing Interface (MPI) library and NVIDIA's parallel computing platform CUDA will be compared, as MPI is commonly used on server nodes with many CPUs. We will also compare static grid structures with those of adaptive mesh structures. The physical test bed will be the simulation of heavy interstellar atoms interacting with a background plasma, the heliosphere, taken from a fully consistent coupled MHD/kinetic particle code. It is known that charge exchange is an important factor in space plasmas; specifically, it modifies the structure of the heliosphere itself. We would like to thank the Alabama Supercomputer Authority for the use of their computational resources.
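
    As a hedged illustration of space-filling-curve ordering, here is the simpler Morton (Z-order) key in Python rather than the Hilbert curve used in the work above; both map nearby 3-D cells to nearby 1-D memory locations, with Hilbert preserving locality somewhat better at the cost of a more involved index transform. Names and the 10-bit resolution are our choices.

      import numpy as np

      def part1by2(v):
          # spread the 10 low bits of v to every third bit position
          v &= 0x000003FF
          v = (v | (v << 16)) & 0xFF0000FF
          v = (v | (v << 8)) & 0x0300F00F
          v = (v | (v << 4)) & 0x030C30C3
          v = (v | (v << 2)) & 0x09249249
          return v

      def morton3(ix, iy, iz):
          # interleave the bits of the three cell indices
          return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

      def curve_order(pos, box=1.0, bits=10):
          cells = np.minimum((pos / box * (1 << bits)).astype(np.int64),
                             (1 << bits) - 1)
          keys = np.array([morton3(i, j, k) for i, j, k in cells])
          return np.argsort(keys)     # permutation giving curve-ordered storage

      # pos = np.random.rand(10_000, 3)
      # pos_sorted = pos[curve_order(pos)]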

  2. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    NASA Astrophysics Data System (ADS)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST.
    Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1
    Licensing provisions: BSD 3-clause
    Programming language: C, C++
    External routines/libraries: For compiling: SCons, MPI (optional)
    Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl.
    Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016
    Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version.
    Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton.
    Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and coupling algorithmics are abstracted and incorporated in MaMiCo. Once an algorithm is set up in MaMiCo, it can be used and extended, even if other solvers are used (as soon as the respective interfaces are implemented/available).
    Reasons for the new version: We have incorporated a new algorithm to simulate transient molecular-continuum systems and to automatically sample data over multiple MD runs that can be executed simultaneously (on, e.g., a compute cluster). MaMiCo has further been extended by an interface to incorporate boundary forcing to account for open molecular dynamics boundaries. Besides support for coupling with various MD and CFD frameworks, the new version contains a test case that allows to run molecular-continuum Couette flow simulations out-of-the-box. No external tools or simulation codes are required anymore. However, the user is free to switch from the included MD simulation package to LAMMPS. For details on how to run the transient Couette problem, see the file README in the folder coupling/tests, remark on MaMiCo V1.1.
    Summary of revisions: Open boundary forcing; multi-instance MD sampling; support for transient molecular-continuum systems
    Restrictions: Currently, only single-centered systems are supported. For access to the LAMMPS-based implementation of DPD boundary forcing, please contact Xin Bian, xin.bian@tum.de.
    Additional comments: Please see file license_mamico.txt for further details regarding distribution and advertising of this software.

  3. Motion of dust particles in nonuniform magnetic field and applicability of smoothed particle hydrodynamics simulation

    NASA Astrophysics Data System (ADS)

    Saitou, Y.

    2018-01-01

    An SPH (Smoothed Particle Hydrodynamics) simulation code is developed to reproduce our findings on the behavior of dust particles, which were obtained in our previous experiments (Phys. Plasmas, 23, 013709 (2016) and Abst. 18th Intern. Cong. Plasma Phys. (Kaohsiung, 2016)). Usually, in an SPH simulation, a smoothed particle is interpreted as a discretized fluid element. Here we regard the particles as dust particles, because it is known that the behavior of dust particles in complex plasmas can in many cases be described using fluid dynamics equations. In the newly developed simulation, various rotation velocities that are difficult to achieve in the experiment are imposed on particles at the boundaries, and the motion of the particles is investigated. Preliminary results obtained by the simulation are shown.

  4. PIC Simulations of Hypersonic Plasma Instabilities

    NASA Astrophysics Data System (ADS)

    Niehoff, D.; Ashour-Abdalla, M.; Niemann, C.; Decyk, V.; Schriver, D.; Clark, E.

    2013-12-01

    The plasma sheaths formed around hypersonic aircraft (Mach number, M > 10) are relatively unexplored and of interest today to both further the development of new technologies and solve long-standing engineering problems. Both laboratory experiments and analytical/numerical modeling are required to advance the understanding of these systems; it is advantageous to perform these tasks in tandem. There has already been some work done to study these plasmas by experiments that create a rapidly expanding plasma through ablation of a target with a laser. In combination with a preformed magnetic field, this configuration leads to a magnetic "bubble" formed behind the front as particles travel at about Mach 30 away from the target. Furthermore, the experiment was able to show the generation of fast electrons which could be due to instabilities on electron scales. To explore this, future experiments will have more accurate diagnostics capable of observing time- and length-scales below typical ion scales, but simulations are a useful tool to explore these plasma conditions theoretically. Particle in Cell (PIC) simulations are necessary when phenomena are expected to be observed at these scales, and also have the advantage of being fully kinetic with no fluid approximations. However, if the scales of the problem are not significantly below the ion scales, then the initialization of the PIC simulation must be very carefully engineered to avoid unnecessary computation and to select the minimum window where structures of interest can be studied. One method of doing this is to seed the simulation with either experiment or ion-scale simulation results. Previous experiments suggest that a useful configuration for studying hypersonic plasma configurations is a ring of particles rapidly expanding transverse to an external magnetic field, which has been simulated on the ion scale with an ion-hybrid code. This suggests that the PIC simulation should have an equivalent configuration; however, modeling a plasma expanding radially in every direction is computationally expensive. In order to reduce the computational expense, we use a radial density profile from the hybrid simulation results to seed a self-consistent PIC simulation in one direction (x), while creating a current in the direction (y) transverse to both the drift velocity and the magnetic field (z) to create the magnetic bubble observed in experiment. The simulation will be run in two spatial dimensions but retain three velocity dimensions, and the results will be used to explore the growth of micro-instabilities present in hypersonic plasmas in the high-density region as it moves through the simulation box. This will still require a significantly large box in order to compare with experiment, as the experiments are being performed over distances of 10⁴ λDe and durations of 10⁵ ωpe⁻¹.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Na; Zhang, Peng; Kang, Wei

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility, and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
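
    For reference, the unmodified Morse pair potential that serves as the starting point above, in Python; the paper's modification and its effective-mass parameterization fitted to blood plasma are calibration details not reproduced here, and the parameter values below are placeholders.

      import numpy as np

      def morse(r, D=1.0, alpha=2.0, r0=1.0):
          # V(r) = D * (exp(-2*alpha*(r - r0)) - 2*exp(-alpha*(r - r0)))
          e = np.exp(-alpha * (r - r0))
          V = D * (e * e - 2.0 * e)
          F = 2.0 * alpha * D * (e * e - e)    # F = -dV/dr; repulsive for r < r0
          return V, F

      # r = np.linspace(0.7, 3.0, 200)
      # V, F = morse(r)                        # minimum V = -D at r = r0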

  6. SPH with dynamical smoothing length adjustment based on the local flow kinematics

    NASA Astrophysics Data System (ADS)

    Olejnik, Michał; Szewc, Kamil; Pozorski, Jacek

    2017-11-01

    Due to the Lagrangian nature of Smoothed Particle Hydrodynamics (SPH), the adaptive resolution remains a challenging task. In this work, we first analyse the influence of the simulation parameters and the smoothing length on solution accuracy, in particular in high strain regions. Based on this analysis we develop a novel approach to dynamically adjust the kernel range for each SPH particle separately, accounting for the local flow kinematics. We use the Okubo-Weiss parameter that distinguishes the strain and vorticity dominated regions in the flow domain. The proposed development is relatively simple and implies only a moderate computational overhead. We validate the modified SPH algorithm for a selection of two-dimensional test cases: the Taylor-Green flow, the vortex spin-down, the lid-driven cavity and the dam-break flow against a sharp-edged obstacle. The simulation results show good agreement with the reference data and improvement of the long-term accuracy for unsteady flows. For the lid-driven cavity case, the proposed dynamical adjustment remedies the problem of tensile instability (particle clustering).
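
    A Python sketch of the Okubo-Weiss criterion on a gridded velocity field (finite differences stand in for the per-particle SPH gradient estimates used in the paper): Q > 0 marks strain-dominated regions and Q < 0 vorticity-dominated ones, which is the flag used to shrink or grow each particle's kernel range.

      import numpy as np

      def okubo_weiss(u, v, dx, dy):
          # Q = s_n^2 + s_s^2 - omega^2 from the velocity gradient tensor
          du_dy, du_dx = np.gradient(u, dy, dx)
          dv_dy, dv_dx = np.gradient(v, dy, dx)
          s_n = du_dx - dv_dy                  # normal strain
          s_s = dv_dx + du_dy                  # shear strain
          omega = dv_dx - du_dy                # vorticity
          return s_n**2 + s_s**2 - omega**2

      # Taylor-Green field as a quick test (arrays indexed [y, x]):
      # h = 2.0 * np.pi / 63
      # x, y = np.meshgrid(np.linspace(0, 2*np.pi, 64), np.linspace(0, 2*np.pi, 64))
      # Q = okubo_weiss(np.cos(x) * np.sin(y), -np.sin(x) * np.cos(y), h, h)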

  7. Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.

    PubMed

    Marquez-Lago, Tatiana T; Burrage, Kevin

    2007-09-14

    In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events in relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
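
    One common form of the binomial tau-leap update, sketched in Python for a single bimolecular channel A + B → C (the paper's spatial next-subvolume machinery is not shown, and the rate and step values are illustrative): the binomial draw caps the number of firings at the available reactant copies, which is the safeguard that distinguishes it from a plain Poisson leap.

      import numpy as np

      def binomial_tau_leap(nA, nB, c, tau, t_end, seed=0):
          rng = np.random.default_rng(seed)
          t, nC = 0.0, 0
          while t < t_end and nA > 0 and nB > 0:
              a = c * nA * nB                 # propensity of A + B -> C
              n_max = min(nA, nB)             # most firings the reactants allow
              p = min(a * tau / n_max, 1.0)   # per-firing probability for the leap
              k = rng.binomial(n_max, p)      # firings in [t, t + tau), capped
              nA, nB, nC = nA - k, nB - k, nC + k
              t += tau
          return nA, nB, nC

      # print(binomial_tau_leap(nA=1000, nB=800, c=1e-4, tau=0.01, t_end=5.0))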

  8. Problems in obtaining perfect images by single-particle electron cryomicroscopy of biological structures in amorphous ice.

    PubMed

    Henderson, Richard; McMullan, Greg

    2013-02-01

    Theoretical considerations together with simulations of single-particle electron cryomicroscopy images of biological assemblies in ice demonstrate that atomic structures should be obtainable from images of a few thousand asymmetric units, provided the molecular weight of the whole assembly being studied is greater than the minimum needed for accurate position and orientation determination. However, with present methods of specimen preparation and current microscope and detector technologies, many more particles are needed, and the alignment of smaller assemblies is difficult or impossible. Only larger structures, with enough signal to allow good orientation determination and with enough images to allow averaging of many hundreds of thousands or even millions of asymmetric units, have successfully produced high-resolution maps. In this review, we compare the contrast of experimental electron cryomicroscopy images of two smaller molecular assemblies, namely apoferritin and beta-galactosidase, with that expected from perfect simulated images calculated from their known X-ray structures. We show that the contrast and signal-to-noise ratio of experimental images still require significant improvement before it will be possible to realize the full potential of single-particle electron cryomicroscopy. In particular, although reasonably good orientations can be obtained for beta-galactosidase, we have been unable to obtain reliable orientation determination from experimental images of apoferritin. Simulations suggest that at least 2-fold improvement of the contrast in experimental images at ~10 Å resolution is needed and should be possible.

  9. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in a loss of accuracy, but may also lead to an improper reproduction of the mixing process, which is limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows obtaining the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction-versus-time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a limited number of particles.
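
    Combining the kernels of two candidate reactants amounts to convolving their density functions, which has a closed form for isotropic Gaussian kernels. The sketch below evaluates that co-location density and a first-order reaction probability over a time step; the Gaussian kernel choice and the probability formula are illustrative assumptions.

        import numpy as np

        def encounter_density(x1, x2, h):
            """Density for two molecules, each represented by a particle carrying
            a 3D Gaussian kernel of bandwidth h, to occupy the same point: the
            convolution of N(x1, h^2 I) and N(x2, h^2 I) at zero separation,
            i.e. a Gaussian in r = |x1 - x2| with variance 2 h^2."""
            r2 = np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
            var = 2.0 * h**2
            return np.exp(-r2 / (2.0 * var)) / (2.0 * np.pi * var) ** 1.5

        def reaction_probability(x1, x2, h, kf, dt):
            """First-order estimate of the probability that the pair reacts in dt."""
            return 1.0 - np.exp(-kf * encounter_density(x1, x2, h) * dt)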

  10. Numerical study of suspensions of deformable particles.

    NASA Astrophysics Data System (ADS)

    Brandt, Luca; Rosti, Marco Edoardo

    2017-11-01

    We consider a model non-Newtonian fluid consisting of a suspension of deformable particles in a Newtonian solvent. Einstein showed in his pioneering work that the relative increase in effective viscosity is a linear function of the particle volume fraction for dilute suspensions of rigid particles. Inertia has been shown to introduce deviations from the behaviour predicted by the different empirical fits, an effect that can be related to an increase of the effective volume fraction. We here focus on the effect of elasticity, i.e. visco-elastic deformable particles. To tackle the problem at hand, we perform three-dimensional Direct Numerical Simulation of a plane Couette flow with a suspension of neutrally buoyant deformable viscous hyper-elastic particles. We show that elasticity produces a shear-thinning effect in elastic suspensions (in comparison to rigid ones) and that it can be understood in terms of a reduction of the effective volume fraction of the suspension. The deformation modifies the particle motion reducing the level of mutual interaction. Normal stress differences will also be considered. European Research Council, Grant No. ERC-2013-CoG- 616186, TRITOS; SNIC (the Swedish National Infrastructure for Computing).

  11. A Multi-Fidelity Surrogate Model for Handling Real Gas Equations of State

    NASA Astrophysics Data System (ADS)

    Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S. "Bala"

    2016-11-01

    The explosive dispersal of particles is an example of a complex multiphase and multi-species fluid flow problem. This problem has many engineering applications, including particle-laden explosives. In these flows, the detonation products of the explosive cannot be treated as a perfect gas, so a real gas equation of state is used to close the governing equations (unlike air, which uses the ideal gas equation for closure). As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the more accurate, yet computationally expensive, methods to deal with this is a scheme that iterates between the two equations of state until pressure and thermal equilibrium are achieved inside each computational cell. This work strives to create a multi-fidelity surrogate model of this process. We then study the performance of the model with respect to the iterative method by performing both gas-only and particle-laden flow simulations using an Eulerian-Lagrangian approach with a finite volume code. Specifically, the model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel modeling approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.

  12. Analysis of orbital perturbations acting on objects in orbits near geosynchronous earth orbit

    NASA Technical Reports Server (NTRS)

    Friesen, Larry J.; Jackson, Albert A., IV; Zook, Herbert A.; Kessler, Donald J.

    1992-01-01

    The paper presents a numerical investigation of orbital evolution for objects started in GEO or in orbits near GEO in order to study potential orbital debris problems in this region. Perturbations simulated include nonspherical terms in the Earth's geopotential field, lunar and solar gravity, and solar radiation pressure. Objects simulated include large satellites, for which solar radiation pressure is insignificant, and small particles, for which solar radiation pressure is an important force. Results for large satellites are largely in agreement with previous GEO studies that used classical perturbation techniques. The orbit planes of GEO satellites placed in a stable-plane orbit inclined approximately 7.3 deg to the equator experience very little precession, always remaining within 1.2 percent of their initial orientation. Solar radiation pressure generates two major effects on small particles: an orbital eccentricity oscillation anticipated from previous research, and an oscillation in orbital inclination.

  13. U-238 fission and Pu-239 production in subcritical assembly

    NASA Astrophysics Data System (ADS)

    Grab, Magdalena; Wojciechowski, Andrzej

    2018-04-01

    The project addresses U-238 fission reactions and Pu-239 production reactions in a subcritical assembly. The experiment took place in November 2014 at the Dzhelepov Laboratory of Nuclear Problems (JINR, Dubna) using the PHASOTRON. Data from this experiment were analyzed in the Laboratory of Information Technologies (LIT). Four MCNPX model combinations were considered for the simulation: Bertini/Dresner, Bertini/ABLA, INCL4/Dresner and INCL4/ABLA. The main goal of the project was to compare the experimental data with the simulation results. We obtained good agreement between the experimental data and the computed results, especially for detectors placed beside the assembly axis. In addition, U-238 fission reactions were more probable in the region of higher particle energies, located closer to the assembly axis and the particle beam, whereas Pu-239 production reactions dominated in the peripheral region of the geometry.

  14. SWIFT: SPH With Inter-dependent Fine-grained Tasking

    NASA Astrophysics Data System (ADS)

    Schaller, Matthieu; Gonnet, Pedro; Chalk, Aidan B. G.; Draper, Peter W.

    2018-05-01

    SWIFT runs cosmological simulations on peta-scale machines, solving gravity and SPH. It uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles, combining these with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single fixed but time-variable softening length for all the particles. Many useful external potentials are also available, such as galaxy haloes or stratified boxes that are used in idealised problems. SWIFT implements a standard LCDM cosmology background expansion and solves the equations in a comoving frame; equations of state of dark energy evolve with the scale factor. The structure of the code allows modified-gravity solvers or self-interacting dark matter schemes to be implemented. Many hydrodynamics schemes are implemented in SWIFT and the software allows users to add their own.

  15. Simulation of ultra-high energy photon propagation in the geomagnetic field

    NASA Astrophysics Data System (ADS)

    Homola, P.; Góra, D.; Heck, D.; Klages, H.; Pękala, J.; Risse, M.; Wilczyńska, B.; Wilczyński, H.

    2005-12-01

    The identification of primary photons or specifying stringent limits on the photon flux is of major importance for understanding the origin of ultra-high energy (UHE) cosmic rays. UHE photons can initiate particle cascades in the geomagnetic field, which leads to significant changes in the subsequent atmospheric shower development. We present a Monte Carlo program allowing detailed studies of conversion and cascading of UHE photons in the geomagnetic field. The program, named PRESHOWER, can be used both as an independent tool and together with a shower simulation code. With the stand-alone version of the code it is possible to investigate various properties of the particle cascade induced by UHE photons interacting in the Earth's magnetic field before entering the Earth's atmosphere. Combining this program with an extensive air shower simulation code such as CORSIKA offers the possibility of investigating signatures of photon-initiated showers. In particular, features can be studied that help to discern such showers from the ones induced by hadrons. As an illustration, calculations for the conditions of the southern part of the Pierre Auger Observatory are presented.

    Catalogue identifier: ADWG
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG
    Program obtainable: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer on which the program has been thoroughly tested: Intel-Pentium based PC
    Operating system: Linux, DEC-Unix
    Programming language used: C, FORTRAN 77
    Memory required to execute with typical data: <100 kB
    No. of bits in a word: 32
    Has the code been vectorized?: no
    Number of lines in distributed program, including test data, etc.: 2567
    Number of bytes in distributed program, including test data, etc.: 25 690
    Distribution format: tar.gz
    Other procedures used in PRESHOWER: IGRF [N.A. Tsyganenko, National Space Science Data Center, NASA GSFC, Greenbelt, MD 20771, USA, http://nssdc.gsfc.nasa.gov/space/model/magnetos/data-based/geopack.html], bessik, ran2 [Numerical Recipes, http://www.nr.com]
    Nature of the physical problem: Simulation of a cascade of particles initiated by a UHE photon passing through the geomagnetic field above the Earth's atmosphere.
    Method of solution: The primary photon is tracked until its conversion into an e+e- pair or until it reaches the upper atmosphere. If conversion occurs, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). The procedure ends at the top of the atmosphere and the shower particle data are saved.
    Restrictions on the complexity of the problem: Gamma conversion into particles other than an electron pair has not been taken into account.
    Typical running time: 100 preshower events with primary energy 10 eV require a 800 MHz CPU time of about 50 min; with 10 eV the simulation time for 100 events grows to 500 min.

  16. Microstructural characterization of random packings of cubic particles

    PubMed Central

    Malmir, Hessam; Sahimi, Muhammad; Tabar, M. Reza Rahimi

    2016-01-01

    Understanding the properties of random packings of solid objects is of critical importance to a wide variety of fundamental scientific and practical problems. The great majority of the previous works focused, however, on packings of spherical and sphere-like particles. We report the first detailed simulation and characterization of packings of non-overlapping cubic particles. Such packings arise in a variety of problems, ranging from biological materials, to colloids and fabrication of porous scaffolds using salt powders. In addition, packing of cubic salt crystals arise in various problems involving preservation of pavements, paintings, and historical monuments, mineral-fluid interactions, CO2 sequestration in rock, and intrusion of groundwater aquifers by saline water. Not much is known, however, about the structure and statistical descriptors of such packings. We have developed a version of the random sequential addition algorithm to generate such packings, and have computed a variety of microstructural descriptors, including the radial distribution function, two-point probability function, orientational correlation function, specific surface, and mean chord length, and have studied the effect of finite system size and porosity on such characteristics. The results indicate the existence of both spatial and orientational long-range order in the packing, which is more distinctive for higher packing densities. The maximum packing fraction is about 0.57. PMID:27725736
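
    A random sequential addition packing is simple to sketch. The version below is deliberately reduced to axis-aligned cubes, for which non-overlap is a per-coordinate interval test; the packings characterized in the paper also sample random orientations, which requires a separating-axis overlap check.

        import numpy as np

        rng = np.random.default_rng(1)

        def rsa_axis_aligned_cubes(box, edge, max_attempts=100_000):
            """Random sequential addition of non-overlapping, axis-aligned cubes
            of edge length `edge` in a cubic box of side `box` (simplified sketch)."""
            centers = []
            for _ in range(max_attempts):
                c = rng.uniform(edge / 2, box - edge / 2, size=3)
                # two axis-aligned cubes overlap iff they overlap along every axis
                if all(np.max(np.abs(c - p)) >= edge for p in centers):
                    centers.append(c)
            return np.array(centers)

        # packing fraction of the generated configuration:
        # phi = len(centers) * edge**3 / box**3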

  18. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in an actual positioning scenario. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function based on symbol estimation is derived. Second, since minimizing this cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is employed for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm takes full advantage of the signal feature to improve the positioning accuracy; on the other hand, the improved PSO algorithm speeds up the symbol search by nearly one hundred times while still reaching a globally optimal solution.
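
    A minimal sketch of PSO over an integer symbol alphabet is shown below: particle positions evolve in a continuous space and are rounded and wrapped onto the alphabet at evaluation time. The update coefficients and this projection rule are generic PSO choices, not the specific improvements proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def pso_integer(cost, n_symbols, alphabet_size, n_particles=40,
                        iters=200, w=0.7, c1=1.5, c2=1.5):
            """Minimise cost(symbol_vector) over integer vectors by standard PSO."""
            pos = rng.integers(0, alphabet_size, (n_particles, n_symbols)).astype(float)
            vel = np.zeros_like(pos)
            project = lambda p: np.round(p).astype(int) % alphabet_size
            pbest = pos.copy()
            pbest_cost = np.array([cost(project(p)) for p in pos])
            g = pbest[np.argmin(pbest_cost)].copy()  # global best
            for _ in range(iters):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
                pos = pos + vel
                costs = np.array([cost(project(p)) for p in pos])
                better = costs < pbest_cost
                pbest[better], pbest_cost[better] = pos[better], costs[better]
                g = pbest[np.argmin(pbest_cost)].copy()
            return project(g)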

  19. Particle sedimentation in a sheared viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Murch, William L.; Krishnan, Sreenath; Shaqfeh, Eric S. G.; Iaccarino, Gianluca

    2017-11-01

    Particle suspensions are ubiquitous in engineered processes, biological systems, and natural settings. For an engineering application - whether the intent is to suspend and transport particles (e.g., in hydraulic fracturing fluids) or allow particles to sediment (e.g., in industrial separations processes) - understanding and prediction of the particle mobility is critical. This task is often made challenging by the complex nature of the fluid phase, for example, due to fluid viscoelasticity. In this talk, we focus on a fully 3D flow problem in a viscoelastic fluid: a settling particle with a shear flow applied in the plane perpendicular to gravity (referred to as orthogonal shear). Previously, it has been shown that an orthogonal shear flow can reduce the settling rate of particles in viscoelastic fluids. Using experiments and numerical simulations across a wide range of sedimentation and shear Weissenberg number, this talk will address the underlying physical mechanism responsible for the additional drag experienced by a rigid sphere settling in a confined viscoelastic fluid with orthogonal shear. We will then explore multiple particle effects, and discuss the implications and extensions of this work for particle suspensions. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747 (WLM).

  20. Crystal collimator systems for high energy frontier

    NASA Astrophysics Data System (ADS)

    Sytov, A. I.; Tikhomirov, V. V.; Lobko, A. S.

    2017-07-01

    Crystal collimators can potentially improve considerably the cleaning performance of present collimation systems based on amorphous collimators. A crystal-based collimation scheme which relies on channeling particle deflection in bent crystals has been proposed and extensively studied both theoretically and experimentally. However, since the efficiency of particle capture into the channeling regime does not exceed ninety percent, this collimation scheme partly suffers from the same leakage problems as the schemes using amorphous collimators. To improve further the cleaning efficiency of the crystal-based collimation system to meet the requirements of the FCC, we suggest here a double crystal-based collimation scheme, in which a second crystal is introduced to enhance the deflection of the particles that escape capture into the channeling regime in the first crystal. The application of the effect of multiple volume reflection in one bent crystal, and of the same effect in a sequence of crystals, is simulated and compared for different crystal numbers and materials at the energy of 50 TeV. To also enhance the efficiency of use of the first crystal of the suggested double crystal-based scheme, we propose two methods: increasing the probability of particle capture into the channeling regime on the first crystal passage by fabricating a crystal cut, and amplifying the deflection of nonchanneled particles through multiple volume reflection in one bent crystal, accompanied by particle channeling along a skew plane. We simulate both of these methods for the 50 TeV FCC energy.

  1. Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Kudo, Shuhei; Bian, Xin; Li, Zhen; Karniadakis, George Em

    2015-09-01

    Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, i.e. the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamic properties of the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).

  3. Physical foundation of the fluid particle dynamics method for colloid dynamics simulation.

    PubMed

    Furukawa, Akira; Tateno, Michio; Tanaka, Hajime

    2018-05-16

    Colloid dynamics is significantly influenced by many-body hydrodynamic interactions mediated by a suspending fluid. However, theoretical and numerical treatments of such interactions are extremely difficult. To overcome this situation, we developed a fluid particle dynamics (FPD) method [H. Tanaka and T. Araki, Phys. Rev. Lett., 2000, 85, 1338], which is based on two key approximations: (i) a colloidal particle is treated as a highly viscous particle and (ii) the viscosity profile is described by a smooth interfacial profile function. Approximation (i) makes our method free from the solid-fluid boundary condition, significantly simplifying the treatment of many-body hydrodynamic interactions while satisfying the incompressibility condition without the Stokes approximation. Approximation (ii) allows us to incorporate an extra degree of freedom of a fluid, e.g., orientational order or concentration, as an additional field variable. Here, we consider two fundamental problems associated with these approximations. One is the introduction of thermal noise and the other is the incorporation of coupling of the colloid surface with an order parameter introduced into a fluid component, which is crucial when considering colloidal particles suspended in a complex fluid. We show that our FPD method makes it possible to simulate colloid dynamics properly while including full hydrodynamic interactions, inertia effects, incompressibility, thermal noise, and additional degrees of freedom of a fluid, which may be relevant for wide applications in colloidal and soft matter science.
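
    Approximation (ii) can be made concrete in a few lines: the viscosity field is interpolated between the solvent and particle values through the smooth interfacial profile phi(r) = {1 + tanh[(a - r)/xi]}/2, with particle radius a and interface width xi. The one-dimensional grid and parameter names below are illustrative.

        import numpy as np

        def fpd_viscosity(grid_x, centers, a, xi, eta_fluid, eta_particle):
            """Smooth viscosity field of the fluid particle dynamics method:
            eta(x) = eta_fluid + (eta_particle - eta_fluid) * phi(x)."""
            phi = np.zeros_like(grid_x, dtype=float)
            for c in centers:
                r = np.abs(grid_x - c)  # 1D distance for illustration
                phi += 0.5 * (1.0 + np.tanh((a - r) / xi))
            phi = np.clip(phi, 0.0, 1.0)  # guard against overlapping profiles
            return eta_fluid + (eta_particle - eta_fluid) * phi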

  4. Dynamic simulations of geologic materials using combined FEM/DEM/SPH analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, J P; Johnson, S M

    2008-03-26

    An overview of the Livermore Distinct Element Code (LDEC) is presented, and results are detailed from a study investigating the effect of explosive and impact loading on geologic materials using LDEC. LDEC was initially developed to simulate tunnels and other structures in jointed rock masses using large numbers of polyhedral blocks. Many geophysical applications, such as projectile penetration into rock, concrete targets, and boulder fields, require a combination of continuum and discrete methods in order to predict the formation and interaction of the fragments produced. In an effort to model this class of problems, LDEC now includes implementations of Cosserat point theory and cohesive elements. This approach directly simulates the transition from continuum to discontinuum behavior, thereby allowing for dynamic fracture within a combined finite element/discrete element framework. In addition, there are many applications involving geologic materials where fluid-structure interaction is important. To facilitate the solution of this class of problems, a Smoothed Particle Hydrodynamics (SPH) capability has been incorporated into LDEC to simulate fully coupled systems involving geologic materials and a saturating fluid. We present results from a study of a broad range of geomechanical problems that exercise the various components of LDEC in isolation and in tandem.

  5. BHR equations re-derived with immiscible particle effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, John Dennis; Horwitz, Jeremy A.

    2015-05-01

    Compressible and variable-density turbulent flows with dispersed-phase effects are found in many applications ranging from combustion to cloud formation. These types of flows are among the most challenging to simulate. While the exact equations governing a system of particles and fluid are known, computational resources limit the scale and detail that can be simulated in this type of problem. Therefore, a common approach is to simulate averaged versions of the flow equations, which still capture the salient physics and are relatively less computationally expensive. Besnard developed such a model for variable-density miscible turbulence, where ensemble averaging was applied to the flow equations to yield a set of filtered equations. Besnard further derived transport equations for the Reynolds stresses, the turbulent mass flux, and the density-specific-volume covariance to help close the filtered momentum and continuity equations. We re-derive the exact BHR closure equations, which include integral terms owing to immiscible effects. Physical interpretations of the additional terms are proposed along with simple models. The goal of this work is to extend the BHR model to allow for the simulation of turbulent flows where an immiscible dispersed phase is non-trivially coupled with the carrier phase.

  6. Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees

    PubMed Central

    Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael

    2014-01-01

    Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges on database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical for processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, whose running time is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights into how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with an improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
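
    For reference, the quadratic-time baseline that both the exact and approximate algorithms improve upon simply bins all pairwise distances into fixed-width buckets:

        import numpy as np

        def sdh_naive(points, bin_width, n_bins):
            """Spatial distance histogram by brute force: O(N^2) pair distances.
            points: (N, 3) array of particle coordinates."""
            hist = np.zeros(n_bins, dtype=np.int64)
            for i in range(len(points) - 1):
                d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
                idx = np.minimum((d // bin_width).astype(int), n_bins - 1)
                np.add.at(hist, idx, 1)
            return hist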

  7. Deformation of Soft Tissue and Force Feedback Using the Smoothed Particle Hydrodynamics

    PubMed Central

    Liu, Xuemei; Wang, Ruiyi; Li, Yunhua; Song, Dongdong

    2015-01-01

    We study the deformation and haptic feedback of soft tissue in virtual surgery based on a liver model by using a force feedback device named PHANTOM OMNI, developed by the SensAble Company in the USA. Although a significant amount of research effort has been dedicated to simulating the behavior of soft tissue and implementing force feedback, it is still a challenging problem. This paper introduces a meshfree method for deformation simulation of soft tissue and force computation based on a viscoelastic mechanical model and smoothed particle hydrodynamics (SPH). Firstly, the viscoelastic model can represent the mechanical characteristics of soft tissue, which greatly promotes realism. Secondly, SPH has the features of a meshless technique and self-adaptation, which supply higher precision than mesh-based methods for force feedback computation. Finally, an SPH method based on a dynamic interaction area is proposed to improve the real-time performance of the simulation. The results reveal that the SPH methodology is suitable for simulating soft tissue deformation and calculating force feedback, and that SPH based on a dynamic local interaction area has significantly higher computational efficiency than standard SPH. Our algorithm has a bright prospect in the area of virtual surgery. PMID:26417380

  8. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
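
    The multilevel idea can be stated in a few lines: write the finest-level expectation as a telescoping sum of level corrections and estimate each correction with its own sample size, so that most samples are taken on the cheap coarse levels. The sketch below is a generic MLMC estimator, not the authors' code; sample_level is a user-supplied solver.

        import numpy as np

        def mlmc_estimate(sample_level, n_samples):
            """Plain multilevel Monte Carlo estimator.
            sample_level(l) returns one sample of Y_l = Q_l - Q_{l-1}
            (with Y_0 = Q_0), where Q_l is the quantity of interest, e.g. a
            particle travel time, computed on discretization level l.
            Then E[Q_L] ~= sum over l of the mean of the Y_l samples."""
            return sum(np.mean([sample_level(l) for _ in range(m)])
                       for l, m in enumerate(n_samples))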

  11. Mathematical inference in one point microrheology

    NASA Astrophysics Data System (ADS)

    Hohenegger, Christel; McKinley, Scott

    2016-11-01

    Pioneered by the work of Mason and Weitz, one-point passive microrheology has been successfully applied to obtain estimates of the loss and storage moduli of viscoelastic fluids when the mean-square displacement obeys a local power law. Using numerical simulations of a fluctuating viscoelastic fluid model, we study the problem of recovering the mechanical parameters of the fluid's memory kernel using statistics such as the mean-square displacement and the increment autocorrelation function. Seeking a better understanding of the influence of the assumptions made in the inversion process, we mathematically quantify the uncertainty in traditional one-point microrheology for simulated data and demonstrate that a large family of memory kernels yields the same statistical signature. We consider both simulated data obtained from a full viscoelastic fluid simulation of the unsteady Stokes equations with fluctuations and from a Generalized Langevin Equation for the particle's motion described by the same memory kernel. From the theory of inverse problems, we propose an alternative method that can be used to recover information about the loss and storage moduli and discuss its limitations and uncertainties. NSF-DMS 1412998.
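
    The basic statistic in this inference is the time-averaged mean-square displacement of a trajectory; in the Mason-Weitz approach the loss and storage moduli are then obtained from the MSD via the generalized Stokes-Einstein relation. Below is a sketch of the MSD estimator alone (the inversion step is not reproduced here).

        import numpy as np

        def mean_square_displacement(x, dt):
            """Time-averaged MSD of a 1D trajectory x sampled with spacing dt.
            Returns lag times and MSD(lag); long lags are increasingly noisy."""
            n = len(x)
            lags = np.arange(1, n)
            msd = np.array([np.mean((x[k:] - x[:-k]) ** 2) for k in lags])
            return lags * dt, msd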

  12. Direct Large-Scale N-Body Simulations of Planetesimal Dynamics

    NASA Astrophysics Data System (ADS)

    Richardson, Derek C.; Quinn, Thomas; Stadel, Joachim; Lake, George

    2000-01-01

    We describe a new direct numerical method for simulating planetesimal dynamics in which N ~ 10^6 or more bodies can be evolved simultaneously in three spatial dimensions over hundreds of dynamical times. This represents several orders of magnitude improvement in resolution over previous studies. The advance is made possible through modification of a stable and tested cosmological code optimized for massively parallel computers. However, owing to the excellent scalability and portability of the code, modest clusters of workstations can treat problems with N ~ 10^5 particles in a practical fashion. The code features algorithms for detection and resolution of collisions and takes into account the strong central force field and flattened Keplerian disk geometry of planetesimal systems. We demonstrate the range of problems that can be addressed by presenting simulations that illustrate oligarchic growth of protoplanets, planet formation in the presence of giant planet perturbations, the formation of the jovian moons, and orbital migration via planetesimal scattering. We also describe methods under development for increasing the timescale of the simulations by several orders of magnitude.

  13. Effects of Inert Dust Clouds on the Extinction of Strained, Laminar Flames at Normal and Micro Gravity

    NASA Technical Reports Server (NTRS)

    Andac, M. Gurhan; Egolfopoulos, Fokion N.; Campbell, Charles S.; Lauvergne, Romain; Wu, Ming-Shin (Technical Monitor)

    2000-01-01

    A combined experimental and detailed numerical study was conducted on the interaction between chemically inert solid particles and strained, atmospheric methane/air and propane/air laminar flames, both premixed and non-premixed. Experimentally, the opposed-jet configuration was used with the addition of a particle seeder capable of operating in conditions of varying gravity. The particle seeding system was calibrated under both normal and micro gravity, and a noticeable gravitational effect was observed. Flame extinction experiments were conducted at normal gravity by seeding inert particles at various number densities and sizes into the reacting gas phase. Experimental data were taken for 20 and 37 μm nickel alloy and 25 and 60 μm aluminum oxide particles. The experiments were simulated by solving along the stagnation streamline the conservation equations of mass, momentum, energy, and species for both phases, with detailed descriptions of chemical kinetics, molecular transport, and thermal radiation. The experimental data were compared with numerical simulations, and insight was provided into the effects on extinction of the fuel type, equivalence ratio, flame configuration, strain rate, particle type, particle size, particle mass, delivery rate, and the orientation of particle injection with respect to the flame and gravity. It was found that for the same injected solid mass, larger particles can result in more effective flame cooling compared to smaller particles, despite the fact that equivalent masses of the larger particles have a smaller total surface-area-to-volume ratio. This counter-intuitive finding resulted from the fact that the heat exchange between the two phases is controlled by the synergistic effect of the total contact area and the temperature difference between the two phases. The results also demonstrate that meaningful scaling of the interactions between the two phases may not be possible due to the complexity of the couplings between the dynamic and thermal parameters of the problem.

  14. Mesoscale Diffractive Photonics in Geosciences

    NASA Astrophysics Data System (ADS)

    Minin, I. V.; Minin, O. V.

    2016-06-01

    Light scattered by various dielectric particles in the atmosphere gives information about the type of molecules and particles and their location, which is important for defining propagation limitations under atmospheric and space weather variations, for crisis communications, etc. Although such investigations explain the far-field properties of the disturbed radiation, the solution of the physical problem requires simulations of the interactions in the near field. It has been shown that a strongly localized EM field near the surface of a single dielectric particle may be formed by non-spherical and non-symmetrical mesoscale particles, in both transmission and reflection modes. It was also shown that the main lobe is narrower for a chain of three cubes than for a single cube in the far field, but there are many side-scattering lobes. The unique advantages provided by mesoscale dielectric photonic-crystal-based particles with three spatial dimensions of arbitrary shape allow the development of new types of micro/nano-probes with subwavelength resolution for ultra-compact, spectrometer-free sensors on board a spacecraft or a plane.

  15. Parameter Estimation of Fractional-Order Chaotic Systems by Using Quantum Parallel Particle Swarm Optimization Algorithm

    PubMed Central

    Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng

    2015-01-01

    Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization, and can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO; this characteristic exponentially increases the amount of search performed in each generation. The behavior of particles in quantum space is governed by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158

  16. Particle connectedness and cluster formation in sequential depositions of particles: integral-equation theory.

    PubMed

    Danwanichakul, Panu; Glandt, Eduardo D

    2004-11-15

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  18. Moving charged particles in lattice Boltzmann-based electrokinetics

    NASA Astrophysics Data System (ADS)

    Kuron, Michael; Rempfer, Georg; Schornbaum, Florian; Bauer, Martin; Godenschwager, Christian; Holm, Christian; de Graaf, Joost

    2016-12-01

    The motion of ionic solutes and charged particles under the influence of an electric field and the ensuing hydrodynamic flow of the underlying solvent is ubiquitous in aqueous colloidal suspensions. The physics of such systems is described by a coupled set of differential equations, along with boundary conditions, collectively referred to as the electrokinetic equations. Capuani et al. [J. Chem. Phys. 121, 973 (2004)] introduced a lattice-based method for solving this system of equations, which builds upon the lattice Boltzmann algorithm for the simulation of hydrodynamic flow and exploits computational locality. However, thus far, a description of how to incorporate moving boundary conditions into the Capuani scheme has been lacking. Moving boundary conditions are needed to simulate multiple arbitrarily moving colloids. In this paper, we detail how to introduce such a particle coupling scheme, based on an analogue to the moving boundary method for the pure lattice Boltzmann solver. The key ingredients in our method are mass and charge conservation for the solute species and a partial-volume smoothing of the solute fluxes to minimize discretization artifacts. We demonstrate our algorithm's effectiveness by simulating the electrophoresis of charged spheres in an external field; for a single sphere we compare to the equivalent electro-osmotic (co-moving) problem. Our method's efficiency and ease of implementation should prove beneficial to future simulations of the dynamics in a wide range of complex nanoscopic and colloidal systems that were previously inaccessible to lattice-based continuum algorithms.

  19. SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip

    Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to fifth order). We validate our algorithm against several test problems: thermal stability of stationary plasma, stability of linear plasma waves, and the two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways of testing for convergence fail, leading to a plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.
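
    Higher-order interpolation in a PIC code amounts to depositing each particle onto several grid nodes with smooth B-spline weights. The sketch below shows the second-order (triangular-shaped-cloud) case in one dimension; SHARP extends this idea up to fifth order, which is not reproduced here.

        import numpy as np

        def quadratic_spline_weights(x_over_dx):
            """Second-order B-spline (TSC) shape-function weights.
            Returns the nearest node index and the weights on the three
            surrounding nodes; the weights sum to one, so charge is conserved."""
            i = int(np.round(x_over_dx))  # nearest grid node
            d = x_over_dx - i             # offset in cell units, |d| <= 1/2
            w = np.array([0.5 * (0.5 - d) ** 2,   # node i - 1
                          0.75 - d ** 2,          # node i
                          0.5 * (0.5 + d) ** 2])  # node i + 1
            return i, w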

  20. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy-ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. The future development of PHITS includes better parameterization in the JQMD model used for nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high-energy heavy-ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe ions passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle of up to 4 degrees.

  1. A correction procedure for thermally two-way coupled point-particles

    NASA Astrophysics Data System (ADS)

    Horwitz, Jeremy; Ganguli, Swetava; Mani, Ali; Lele, Sanjiva

    2017-11-01

    Development of a robust procedure for the simulation of two-way coupled particle-laden flows remains a challenge. Such systems are characterized by an O(1) or greater mass of particles relative to the fluid. The coupling of fluid and particle motion via a drag model means that the undisturbed fluid velocity evaluated at the particle location (which is needed in the drag model) is no longer equal to the interpolated fluid velocity at the particle location. The same issue arises in problems of dispersed flows in the presence of heat transfer: the heat transfer rate to each particle depends on the difference between the particle's temperature and the undisturbed fluid temperature. We borrow ideas from the correction scheme we have developed for particle-fluid momentum coupling to devise a procedure that estimates the undisturbed fluid temperature given the disturbed temperature field created by a point particle. The procedure is verified for the case of a particle settling under gravity and subject to radiation. It is developed in the low-Peclet, low-Boussinesq-number limit, but we discuss the applicability of the same correction procedure outside this regime when augmented by appropriate drag and heat exchange correlations. Supported by DOE; J.H. supported by an NSF GRF.
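
    In spirit, the correction subtracts the particle's self-induced thermal disturbance from the interpolated fluid temperature before evaluating the heat-transfer model. A minimal sketch is given below; the precomputed coefficient c_self (the grid-dependent self-disturbance per unit heat release) and all names are illustrative rather than the authors' notation.

        def undisturbed_temperature(T_interp, q_dot, c_self):
            """Estimate the undisturbed fluid temperature at a particle location.
            T_interp: fluid temperature interpolated to the particle, contaminated
            by the particle's own thermal disturbance; q_dot: heat released by the
            particle; c_self: self-induced disturbance per unit heat release
            (hypothetical, grid-dependent coefficient)."""
            return T_interp - c_self * q_dot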

  2. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks with wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.

  3. Risk assessment of virus infections in the Oder estuary (southern Baltic) on the basis of spatial transport and virus decay simulations.

    PubMed

    Schernewski, G; Jülich, W D

    2001-05-01

    The large Oder (Szczecin) Lagoon (687 km²) at the German-Polish border, close to the Baltic Sea, suffers from severe eutrophication and water quality problems due to the high discharge of water, nutrients and pollutants by the river Oder. Sewage treatment around the lagoon has been much improved in recent years, but large amounts of sewage still enter the Oder river. Human pathogenic viruses can generally be expected in all surface waters that are affected by municipal sewage. There is an increasing awareness that predisposed persons can be infected by a few infective units or even a single active virus. Another new aspect is that at least polioviruses attached to suspended particles can remain infective for weeks and can therefore be transported over long distances. The highest risk of virus input therefore arises from the large amounts of untreated sewage of the city of Szczecin (Poland), which are released into the river Oder and transported to the lagoon and the Baltic Sea. Summer tourism is the most important economic factor in this coastal region and further growth is expected. Human pathogenic viruses might be a serious problem for bathing water quality and sustainable summer tourism. The potential hazard of virus infections along the beaches and shores of the Oder Lagoon and adjacent parts of the Baltic Sea is evaluated on the basis of model simulations and laboratory results. We used two scenarios for the Oder Lagoon, considering free viruses and viruses attached to suspended particulate matter. The spatial impact of the average virus release in the city of Szczecin during summer (the bathing period) was simulated with a hydrodynamic and particle-tracking model. The simulations suggest that, due to fast inactivation, free viruses in the water represent a risk only in the river and near the river mouth. On the other hand, viruses attached to suspended matter can affect large areas of the eastern, Polish part of the lagoon (Grosses Haff). At the same time, the accumulation of viruses on suspended particulate matter increases the likelihood of an infection after incorporation of such a particle. There is no evidence of a risk of virus infection in the western part of the lagoon (Kleines Haff) or along the outer Baltic Sea coast.

  4. Coupling molecular dynamics with lattice Boltzmann method based on the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2017-11-01

    The study of viscous fluid flow coupled with rigid or deformable solids has many applications in biological and engineering problems, e.g., blood cell transport, drug delivery, and particulate flow. We developed a partitioned approach to solve this coupled multiphysics problem. The fluid motion was solved by Palabos (Parallel Lattice Boltzmann Solver), while the solid displacement and deformation were simulated by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The coupling was achieved through the immersed boundary method (IBM). The code modeled both rigid and deformable solids exposed to flow. It was validated against the classic problem of a rigid ellipsoidal particle orbiting in shear flow, blood cell stretching tests, and effective blood viscosity, and demonstrated essentially linear scaling over 16 cores. An example of the fluid-solid coupling was given for flexible filaments (drug carriers) transported in a flowing blood cell suspension, highlighting the advantages and capabilities of the developed code. NIH 1U01HL131053-01A1.
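
    The IBM coupling step can be sketched in one dimension: interpolate the lattice fluid velocity to a Lagrangian marker with a discrete delta kernel, then spread the marker's reaction force back to the lattice with the same kernel. The 4-point cosine kernel and the penalty-style force below are common generic choices, not the specific Palabos/LAMMPS implementation.

```python
import numpy as np

def delta(r):
    """Peskin-style 4-point discrete delta (support |r| < 2 lattice units)."""
    r = np.abs(r)
    return np.where(r < 2.0, 0.25 * (1.0 + np.cos(np.pi * r / 2.0)), 0.0)

def interpolate(u_grid, x_marker):
    idx = np.arange(u_grid.size)
    w = delta(idx - x_marker)
    return np.sum(w * u_grid) / np.sum(w)   # fluid velocity felt by the marker

def spread(f_grid, x_marker, f_marker):
    idx = np.arange(f_grid.size)
    w = delta(idx - x_marker)
    f_grid += f_marker * w / np.sum(w)      # marker force distributed to the grid
    return f_grid

u = np.linspace(0.0, 1.0, 32)               # toy shear-like velocity profile
x = 10.3                                    # marker sits between lattice nodes
u_m = interpolate(u, x)
f = spread(np.zeros(32), x, -0.5 * u_m)     # e.g. a penalty/drag reaction force
print(u_m, f[8:14])
```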

  5. Energetic Particle Loss Estimates in W7-X

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Akaslompolo, Simppa; Drevlak, Micheal; Wolf, Robert; Darrow, Douglass; Gates, David; W7-X Team

    2017-10-01

    The collisionless loss of high-energy H+ and D+ ions in the W7-X device is examined using the BEAMS3D code. Simulations of collisionless losses are performed for a large ensemble of particles distributed over various flux surfaces. A clear loss cone is present in the particle distribution in all cases. These simulations are compared against slowing-down simulations in which electron impact, ion impact, and pitch angle scattering are considered. Full-device simulations allow tracing of particle trajectories to the first-wall components. These simulations provide estimates for the placement of a novel set of energetic particle detectors. Recent performance upgrades allow the code to run on more than 1000 processors, providing high-fidelity simulations. Speedup and future work are discussed. DE-AC02-09CH11466.

  6. A conservative scheme of drift kinetic electrons for gyrokinetic simulation of kinetic-MHD processes in toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Bao, J.; Liu, D.; Lin, Z.

    2017-10-01

    A conservative scheme of drift kinetic electrons for gyrokinetic simulations of kinetic-magnetohydrodynamic processes in toroidal plasmas has been formulated and verified. Both the vector potential and the electron perturbed distribution function are decomposed into an adiabatic part with an analytic solution and a non-adiabatic part solved numerically. The adiabatic parallel electric field is solved directly from the electron adiabatic response, resulting in a high degree of accuracy. The consistency between the electrostatic potential and the parallel vector potential is enforced by using the electron continuity equation. Since particles are used only to calculate the non-adiabatic response, which in turn determines the non-adiabatic vector potential through Ohm's law, the conservative scheme minimizes electron particle noise and mitigates the cancellation problem. Linear dispersion relations of the kinetic Alfvén wave and the collisionless tearing mode in cylindrical geometry have been verified in gyrokinetic toroidal code simulations, which show that the perpendicular grid size can be larger than the electron collisionless skin depth when the mode wavelength is longer than the electron skin depth.

  7. COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics

    NASA Astrophysics Data System (ADS)

    Barletta, Paolo

    2012-02-01

    Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually along its trajectory, so properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can easily be generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summary: Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus driven exclusively by the external optical field. The rare inter-particle collisions are treated with an acceptance/rejection mechanism, that is, by comparing a random number to the collision probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are simulated individually, so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test-run results could be replicated only poorly, because the simulations are very sensitive to machine-level rounding: as the particles are simulated for billions of steps, a small difference in the initial conditions, due to the finite precision of double-precision reals, can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness we have introduced a quadruple-precision version of the code which yields the same results independently of the software used to compile it or the hardware architecture on which the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The Random Number Generator routine, which is the computational core of the algorithm, has been re-written in C++, so there is no longer any need for mixed FORTRAN-C++ compilation.
A quadruple-precision version of the code is provided alongside the original double-precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the source tree neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However, it is convenient to replicate each simulation several times with different initialisations of the random sequence.
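
    The acceptance/rejection step described under "Solution method" can be sketched generically as follows. This is a textbook DSMC-style fragment under simplified assumptions (equal masses, constant cross section), not COOL's actual source.

```python
import numpy as np

rng = np.random.default_rng(2)

def collide_pair(v1, v2):
    """Elastic collision for equal masses: isotropic post-collision directions."""
    g = np.linalg.norm(v1 - v2)
    vc = 0.5 * (v1 + v2)
    cos_t = 2.0 * rng.random() - 1.0
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0 * np.pi * rng.random()
    gp = g * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return vc + 0.5 * gp, vc - 0.5 * gp

def dsmc_collisions(v, sigma, sigma_g_max, n_candidates):
    """Accept a candidate pair when a random number falls below sigma*g / max."""
    for _ in range(n_candidates):
        i, j = rng.choice(v.shape[0], size=2, replace=False)
        g = np.linalg.norm(v[i] - v[j])
        if rng.random() < sigma * g / sigma_g_max:   # acceptance/rejection test
            v[i], v[j] = collide_pair(v[i], v[j])
    return v

v = rng.standard_normal((1000, 3))
v = dsmc_collisions(v, sigma=1.0, sigma_g_max=8.0, n_candidates=500)
print(v.mean(axis=0), (v**2).sum(axis=1).mean())  # momentum and energy conserved
```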

  8. Evaluation on simultaneous removal of particles and off-flavors using population balance for application of powdered activated carbon in dissolved air flotation process.

    PubMed

    Kwak, D H; Yoo, S J; Lee, E J; Lee, J W

    2010-01-01

    Most water treatment plants applying the DAF process face problems controlling off-flavors. For the simultaneous control of particulate impurities and of dissolved organics that cause a pungent taste and odor in water, an effective method would be the simple application of powdered activated carbon (PAC) in the DAF process. A series of experiments was carried out to explore the feasibility of the simultaneous removal of kaolin particles and of the organic compounds that produce off-flavors (2-MIB and geosmin). In addition, the flotation efficiency of kaolin and of PAC particles adsorbing organics in the DAF process was evaluated by employing population balance theory. The removal efficiency of 2-MIB and geosmin under simultaneous treatment with kaolin particles was lower than that of the individual treatment. The decrease in removal efficiency was probably caused by 2-MIB and geosmin remaining on PAC particles in the DAF-treated water after bubble flotation. Simulation results obtained with the population balance model indicate that the initial collision-attachment efficiency of PAC particles was lower than that of kaolin particles.

  9. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    DOE PAGES

    Warren, Michael S.

    2014-01-01

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2^18) processors. We present error analysis and scientific application results from a series of more than ten 69-billion-particle (4096^3) cosmological simulations, accounting for 4×10^20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.

  10. Quantum Simulation of Tunneling in Small Systems

    PubMed Central

    Sornborger, Andrew T.

    2012-01-01

    A number of quantum algorithms have been performed on small quantum computers; these include Shor's prime factorization algorithm, error correction, Grover's search algorithm and a number of analog and digital quantum simulations. Because of the number of gates and qubits necessary, however, digital quantum particle simulations remain untested. A contributing factor to the system size required is the number of ancillary qubits needed to implement matrix exponentials of the potential operator. Here, we show that a set of tunneling problems may be investigated with no ancillary qubits and a cost of one single-qubit operator per time step for the potential evolution, eliminating at least half of the quantum gates required for the algorithm and more than that in the general case. Such simulations are within reach of current quantum computer architectures. PMID:22916333
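
    As context for the potential-evolution cost discussed above, the sketch below runs the classical split-operator analogue of a tunneling simulation on an ordinary CPU: each time step applies the potential as one diagonal phase (the counterpart of the single potential operator per step) and the kinetic term in Fourier space. It is a NumPy illustration under assumed units, not a quantum-circuit implementation.

```python
import numpy as np

n, L, dt, steps = 256, 40.0, 0.01, 400
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
V = np.where(np.abs(x) < 0.5, 2.0, 0.0)                # square barrier
psi = np.exp(-(x + 8.0) ** 2 + 1j * 2.0 * x)           # wave packet moving right
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

phase_V = np.exp(-1j * V * dt)                         # diagonal potential phase
phase_T = np.exp(-1j * 0.5 * k**2 * dt)                # kinetic phase in k-space
for _ in range(steps):
    psi = np.fft.ifft(phase_T * np.fft.fft(phase_V * psi))

transmitted = np.sum(np.abs(psi[x > 0.5]) ** 2)
print(f"tunneling probability ~ {transmitted:.3f}")
```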

  11. Incompressible SPH Model for Simulating Violent Free-Surface Fluid Flows

    NASA Astrophysics Data System (ADS)

    Staroszczyk, Ryszard

    2014-06-01

    In this paper the problem of transient gravity-wave propagation in a viscous incompressible fluid is considered, with a focus on flows with fast-moving free surfaces. The governing equations of the problem are solved by the smoothed particle hydrodynamics (SPH) method. In order to impose the incompressibility constraint on the fluid motion, the so-called projection method is applied, in which the discrete SPH equations are integrated in time using a fractional-step technique. The numerical performance of the proposed model has been assessed by comparing its results with experimental data and with results obtained by a standard (weakly compressible) version of the SPH approach. For this purpose, a plane dam-break flow problem is simulated in order to investigate the formation and propagation of a wave generated by the sudden collapse of a water column initially contained in a rectangular tank, as well as the impact of such a wave on a rigid vertical wall. The results of the simulations show the evolution of the free surface of the water, the variation of the velocity and pressure fields in the fluid, and the time history of the pressures exerted by the impacting wave on the wall.

  12. Viscosity of dilute suspensions of rodlike particles: A numerical simulation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Satoru; Matsuoka, Takaaki

    1994-02-01

    The recently developed simulation method, named the particle simulation method (PSM), is extended to predict the viscosity of dilute suspensions of rodlike particles. In this method a rodlike particle is modeled by bonded spheres. Each bond has three types of springs, for stretching, bending, and twisting deformation. The rod model can therefore deform by changing the bond distance, bond angle, and torsion angle between paired spheres. The rod model can represent a variety of rigidities by modifying the bond parameters related to the Young's modulus and shear modulus of the real particle. The time evolution of each constituent sphere of the rod model is followed by a molecular-dynamics-type approach. The intrinsic viscosity of a suspension of rodlike particles is derived by calculating the increased energy dissipation for each sphere of the rod model in a viscous fluid. With and without deformation of the particle, the motion of the rodlike particle was numerically simulated in a three-dimensional simple shear flow at a low particle Reynolds number and without Brownian motion of the particles. The dependence of the intrinsic viscosity of the suspension on the orientation angle, rotation orbit, deformation, and aspect ratio of the particle was investigated. For the rigid rodlike particle, the simulated rotation orbit compared extremely well with the theoretical one obtained for a rigid ellipsoidal particle using Jeffery's equation. The simulated dependence of the intrinsic viscosity on the various factors was also identical to that of theories for suspensions of rigid rodlike particles. For the flexible rodlike particle, the rotation orbit could also be obtained by the particle simulation method, and it was shown that the intrinsic viscosity decreases as recoverable, flow-induced deformation of the rodlike particle occurs.

  13. Augmenting Sand Simulation Environments through Subdivision and Particle Refinement

    NASA Astrophysics Data System (ADS)

    Clothier, M.; Bailey, M.

    2012-12-01

    Recent advances in computer graphics and parallel processing hardware have provided disciplines with new methods to evaluate and visualize data. These advances have proven useful for earth and planetary scientists, as many researchers are using this hardware to process large amounts of data for analysis. This has provided opportunities for collaboration between computer graphics and the earth sciences. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs, we are investigating techniques for simulating the behavior of sand. We are also collaborating with the Jet Propulsion Laboratory's (JPL) DARTS Lab to exchange ideas and gain feedback on our research. The DARTS Lab specializes in simulation of planetary vehicles, such as the Mars rovers. Their simulations utilize a virtual "sand box" to test how a planetary vehicle responds to different environments. Our research builds upon this idea to create a sand simulation framework so that planetary environments, such as the harsh, sandy regions on Mars, are more fully realized. More specifically, we are focusing our research on the interaction between a planetary vehicle, such as a rover, and the sand beneath it, providing further insight into its performance. Unfortunately, this can be a computationally complex problem, especially when trying to represent the enormous quantities of sand particles interacting with each other. However, through the use of high-performance computing, we have developed a technique to subdivide areas of actively participating sand across a large landscape. Similar to a Level of Detail (LOD) technique, we only subdivide regions of a landscape where sand particles are actively interacting with another object. While the sand is within this subdivision window and moves closer to the surface of the interacting object, the sand region subdivides into smaller regions until individual sand particles are left at the surface. For example, sand that is actively interacting with a rover wheel is represented as individual particles, whereas sand further under the surface is represented by larger regions of sand. This technique allows many particles to be represented without the full computational cost of simulating each one individually. In developing this method, we have further generalized these subdivision regions into any volumetric area suitable for use in the simulation. This improves our method, as it allows for more compact subdivision regions and lets the simulation place more emphasis on regions of actively participating sand. We believe that, through this generalization, our research can provide other opportunities within the earth and planetary sciences. Through collaboration with our academic colleagues, we continue to refine our technique and look for other opportunities to apply our research.

  14. Simulations of Model Microswimmers with Fully Resolved Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Oyama, Norihiro; Molina, John J.; Yamamoto, Ryoichi

    2017-10-01

    Swimming microorganisms, which include bacteria, algae, and spermatozoa, play a fundamental role in most biological processes. These swimmers are a special type of active particle, that continuously convert local energy into propulsive forces, thereby allowing them to move through their surrounding fluid medium. While the size, shape, and propulsion mechanism vary from one organism to the next, they share certain general characteristics: they exhibit force-free motion and they swim at a small Reynolds number. To study the dynamics of such systems, we use the squirmer model, which provides an ideal representation of swimmers as spheroidal particles that propel owing to a modified boundary condition at their surface. We have considered the single-particle and many-particle dynamics of swimmers in bulk and confined systems using the smoothed profile method, which allows us to efficiently solve the coupled particle-fluid problem. For the single-particle dynamics, we studied the diffusive behavior caused by the swimming of the particles. At short-time scales, the diffusion is caused by the hydrodynamic interactions, whereas at long-time scales, it is determined by the particle-particle collisions. Thus, the short-time diffusion will be the same for both swimmers and inert tracer particles. We then investigated the dynamics of confined microswimmers using cylindrical and parallel-plate confining walls. For the cylindrical confinement, we find evidence of an order/disorder phase transition which depends on the specific type of swimmers and the size of the cylinder. Under parallel-plane walls, some swimmers exhibit wavelike modes, which lead to traveling density waves that bounce back and forth between the walls. From an analysis of the bulk systems, we can show that this wavelike motion can be understood as a pseudoacoustic mode and is a consequence of the intrinsic swimming properties of the particles. The results presented here, together with the simulation method that we have developed, allow us to better understand the complex hydrodynamic interactions in microswimmer dispersions.

  15. Autonomous sensor manager agents (ASMA)

    NASA Astrophysics Data System (ADS)

    Osadciw, Lisa A.

    2004-04-01

    Autonomous sensor manager agents are presented as an algorithm to perform sensor management within a multisensor fusion network. The design of the hybrid ant system/particle swarm agents is described in detail, with some insight into their performance. Although the algorithm is designed for the general sensor management problem, a simulation example involving two radar systems is presented. Algorithmic parameters are determined by the size of the region covered by the sensor network, the number of sensors, and the number of parameters to be selected. With straightforward modifications, this algorithm can be adapted for most sensor management problems.
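
    The paper's hybrid ant-system/particle-swarm agents are specific to that work; as a generic point of reference, the sketch below shows a bare particle swarm minimizing a toy sensor-placement cost. The cost function, gains, and dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def coverage_cost(positions):
    """Toy cost: spread two 1-D 'radars' to cover targets at -3 and +4."""
    targets = np.array([-3.0, 4.0])
    return sum(np.min(np.abs(positions - t)) for t in targets)

n_particles, dim, iters = 20, 2, 100
x = rng.uniform(-10, 10, (n_particles, dim))
v = np.zeros_like(x)
p_best = x.copy()
p_cost = np.array([coverage_cost(xi) for xi in x])
g_best = p_best[np.argmin(p_cost)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # inertia + cognitive pull (own best) + social pull (swarm best)
    v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
    x = x + v
    cost = np.array([coverage_cost(xi) for xi in x])
    improved = cost < p_cost
    p_best[improved], p_cost[improved] = x[improved], cost[improved]
    g_best = p_best[np.argmin(p_cost)]

print(g_best, coverage_cost(g_best))   # sensors settle near the two targets
```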

  16. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and the selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low-density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
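
    The zero-variance idea behind such transforms can be seen in a small toy problem: if the sampling density is proportional to the contribution of each sample, the weighted estimator is exactly constant. The sketch below is a generic importance-sampling illustration, not the LIFT method itself.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Estimate I = E[exp(-5x)] for x ~ Uniform(0,1): analog vs. biased sampling.
x = rng.random(n)
analog = np.exp(-5 * x)

# Importance density proportional to the integrand (truncated exponential);
# because it matches exp(-5x) exactly, the weighted samples have zero variance.
lam = 5.0
u = rng.random(n)
y = -np.log(1 - u * (1 - np.exp(-lam))) / lam        # inverse-CDF sampling
pdf = lam * np.exp(-lam * y) / (1 - np.exp(-lam))
biased = np.exp(-5 * y) / pdf                        # weight by likelihood ratio

for name, est in (("analog", analog), ("biased", biased)):
    rel_err = est.std() / est.mean() / np.sqrt(n)
    print(f"{name}: mean={est.mean():.5f}  rel. std. err.={rel_err:.2e}")
```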

  17. Research on Orbital Plasma-Electrodynamics (ROPE)

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Wright, K.

    1994-01-01

    Since the development of probe theory by Langmuir and Blodgett, the problem of current collection by a charged spherically or cylindrically symmetric body has been investigated by a number of authors. This paper overviews the development of a fully three-dimensional particle simulation code which can be used to understand the physics of current collection in three dimensions and to analyze data from the future tethered satellite system (TSS). According to the TSS configurations, two types of particle simulation models were constructed: a simple particle simulation (SIPS) and a super particle simulation (SUPS). The models study the electron transient response and its asymptotic behavior around a three-dimensional, highly biased satellite. The potential distribution surrounding the satellite is determined by solving Laplace's equation in the SIPS model and by solving Poisson's equation in the SUPS model. Thus, the potential distribution in space is independent of the particle density distribution in the SIPS model, but does depend on it in the SUPS model. The evolution of the potential distribution in the SUPS model is described. When the spherical satellite is charged to a highly positive potential and immersed in a plasma with a uniform magnetic field, the formation of an electron torus in the equatorial plane (the plane perpendicular to the magnetic field) and the elongation of the torus along the magnetic field are found in both the SIPS and the SUPS models, but the shape of the torus differs. The areas of high potential that exist in the polar regions in the SUPS model exaggerate the elongation of the electron torus along the magnetic field. The current collected by the satellite for different magnetic field strengths is investigated in both models. Due to the nonlinear effects present in SUPS, an oscillation of the current collection curve during the first 10 plasma periods can be seen (this does not appear in SIPS). From the parametric studies, it appears that this oscillation of the current collection curve occurs only when the magnetic field strength is less than 0.2 gauss in the present model.

  18. Development of a particle method of characteristics (PMOC) for one-dimensional shock waves

    NASA Astrophysics Data System (ADS)

    Hwang, Y.-H.

    2018-03-01

    In the present study, a particle method of characteristics is put forward to simulate the evolution of one-dimensional shock waves in barotropic gaseous, closed-conduit, open-channel, and two-phase flows. All these flow phenomena can be described with the same set of governing equations. The proposed scheme is established on the basis of the characteristic equations and is formulated by assigning computational particles to move along the characteristic curves. Both the right- and left-running characteristics are traced and represented by their associated computational particles. The scheme inherits the computational merits of the conventional method of characteristics (MOC) and of moving particle methods, but without their individual deficiencies. In addition, special particles carrying dual states, constructed to enforce the Rankine-Hugoniot relation, are deliberately imposed to emulate the shock structure. Numerical tests are carried out by solving some benchmark problems, and the computational results are compared with available analytical solutions. From the derivation procedure and the computational results obtained, it is concluded that the proposed PMOC will be a useful tool for replicating one-dimensional shock waves.

  19. [Research on the measurement range of particle size with total light scattering method in vis-IR region].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Dai, Jing-min

    2008-12-01

    The problem of determining the particle size range in the visible-infrared region was studied using the independent model algorithm in the total scattering technique. By analyzing and comparing the accuracy of the inversion results for different R-R distributions, the measurement range of particle size was determined. Meanwhile, the corrected extinction coefficient was used instead of the original extinction coefficient, which allows the measurement range of particle size to be determined with higher accuracy. Simulation experiments illustrate that the particle size distribution can be retrieved very well in the range from 0.05 to 18 μm at a relative refractive index m=1.235 in the visible-infrared spectral region, and that the measurement range of particle size will vary with the wavelength range and the relative refractive index. It is feasible to use the constrained least squares inversion method in the independent model to overcome the influence of measurement error, and the inversion results remain satisfactory when 1% stochastic noise is added to the light extinction values.
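
    The inversion step lends itself to a compact sketch: recover a size distribution from multi-wavelength extinction with non-negativity enforced, here via SciPy's nnls as a stand-in for the paper's constrained least-squares scheme. The extinction kernel below is a smooth placeholder, not a Mie or corrected-extinction calculation.

```python
import numpy as np
from scipy.optimize import nnls

d = np.linspace(0.05, 18.0, 60)                  # particle diameters [um]
wl = np.linspace(0.4, 10.0, 25)                  # wavelengths [um]

# Placeholder extinction kernel K[i, j]: response of size d[j] at wavelength wl[i]
alpha = np.pi * d[None, :] / wl[:, None]
K = 2.0 * (1.0 - np.sinc(alpha / np.pi))         # crude, monotone size response

f_true = np.exp(-0.5 * ((d - 5.0) / 1.5) ** 2)   # unimodal (R-R-like) distribution
tau = K @ f_true                                  # synthetic extinction spectrum
noise = 0.01 * np.random.default_rng(5).standard_normal(tau.size)
tau_noisy = tau * (1.0 + noise)                   # 1% stochastic noise

f_est, _ = nnls(K, tau_noisy)                     # non-negative least squares
print(f"recovered peak at d = {d[np.argmax(f_est)]:.2f} um (true peak 5.00 um)")
```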

  20. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. II. IMPLEMENTATION AND PERFORMANCE CHARACTERISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-10-01

    We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared-memory parallel computers on which VINE is intended to run, and describe their influence on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special-purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared-memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve the code's overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderately sized test problems using the Origin 3000 architecture, and up to the maximum machine sizes available to us on several other architectures. At similar accuracy, the performance of VINE used in GRAPE-tree mode is approximately a factor of 2 slower than that of VINE used in host-only mode. Further optimization of the GRAPE/host communications could improve the speed by as much as a factor of 3, but has not yet been implemented in VINE. Finally, we find that although parallel performance on small problems may reach a plateau beyond which more processors bring no additional speedup, performance never decreases, a factor important for running large simulations on many processors with individual time steps, where only a small fraction of the total particles require updates at any given moment.

  1. Coupled discrete element and finite volume solution of two classical soil mechanics problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Feng; Drumm, Eric; Guiochon, Georges A

    One-dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and the discrete element method (DEM), and the results are compared with the analytical solutions. The two-phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow-DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.

  2. Scaling of plane-wave functions in statistically optimized near-field acoustic holography.

    PubMed

    Hald, Jørgen

    2014-11-01

    Statistically Optimized Near-field Acoustic Holography (SONAH) is a Patch Holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors--one to get the sound pressure and the other to get the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.

  3. New methods and astrophysical applications of adaptive mesh fluid simulations

    NASA Astrophysics Data System (ADS)

    Wang, Peng

    The formation of stars, galaxies and supermassive black holes is among the most interesting unsolved problems in astrophysics. These problems are highly nonlinear and involve enormous dynamic ranges, so numerical simulations with spatial adaptivity are crucial to understanding these processes. In this thesis, we discuss the development and application of adaptive mesh refinement (AMR) multi-physics fluid codes to simulate these nonlinear structure formation problems. To simulate the formation of star clusters, we have developed an AMR magnetohydrodynamics (MHD) code, coupled with radiative cooling. We have also developed novel algorithms for sink particle creation, accretion, merging and outflows, all of which are coupled with the fluid algorithms using operator splitting. With this code, we have been able to perform the first AMR-MHD simulation of star cluster formation over several dynamical times, including sink particle and protostellar outflow feedback. The results demonstrated that protostellar outflows can drive supersonic turbulence in dense clumps and explain the observed slow and inefficient star formation. We also suggest that the global collapse rate is the most important factor controlling the massive star accretion rate. On the topic of galaxy formation, we discuss the results of three projects. In the first project, using cosmological AMR hydrodynamics simulations, we found that isolated massive stars still form in cosmic string wakes even though the megaparsec-scale structure has been perturbed significantly by the cosmic strings. In the second project, we calculated the dynamical heating rate in galaxy formation. We found that balancing this heating rate against the atomic cooling rate yields a critical halo mass which agrees with the results of numerical simulations. This demonstrates that the effect of dynamical heating should be incorporated into future semi-analytic models. In the third project, using our AMR-MHD code coupled with the radiative cooling module, we performed the first MHD simulations of disk galaxy formation. We find that the initial magnetic fields are quickly amplified to Milky Way strength in a self-regulated way, with an amplification rate of roughly one e-folding per orbit. This suggests that Milky Way strength magnetic fields might be common in high-redshift disk galaxies. We have also developed an AMR relativistic hydrodynamics code to simulate relativistic black hole jets. We discuss the coupling of the AMR framework with various relativistic solvers and conduct extensive algorithmic comparisons. Via various test problems, we emphasize the importance of resolution studies in relativistic flow simulations, because extremely high resolution is required, especially when shear flows are present in the problem. We then present the results of 3D simulations of supermassive black hole jet propagation and gamma-ray burst jet breakout. Resolution studies of the two 3D jet simulations further highlight the need for high resolution to calculate relativistic flow problems accurately. Finally, to push forward the kind of simulations described above, we need faster codes with more physics included. We describe an implementation of compressible inviscid fluid solvers with AMR on Graphics Processing Units (GPU) using NVIDIA's CUDA. We show that the class of high-resolution shock-capturing schemes can be mapped naturally onto this architecture.
For both uniform and adaptive simulations, we achieve a speedup of approximately 10 times on one Quadro FX 5600 GPU as compared to a single 3 GHz Intel core on the host computer. Our framework can readily be applied to more general systems of conservation laws and extended to higher-order shock-capturing schemes. This is shown directly by an implementation of a magnetohydrodynamic solver, whose performance we compare to the pure hydrodynamic case.

  4. Turbulence-induced relative velocity of dust particles. III. The probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Liubin; Padoan, Paolo; Scalo, John, E-mail: lpan@cfa.harvard.edu, E-mail: ppadoan@icc.ub.edu, E-mail: parrot@astro.as.utexas.edu

    2014-09-01

    Motivated by its important role in the collisional growth of dust particles in protoplanetary disks, we investigate the probability distribution function (PDF) of the relative velocity of inertial particles suspended in turbulent flows. Using the simulation from our previous work, we compute the relative velocity PDF as a function of the friction timescales, τ_p1 and τ_p2, of two particles of arbitrary sizes. The friction time of the particles included in the simulation ranges from 0.1τ_η to 54T_L, where τ_η and T_L are the Kolmogorov time and the Lagrangian correlation time of the flow, respectively. The relative velocity PDF is generically non-Gaussian, exhibiting fat tails. For a fixed value of τ_p1, the PDF shape is the fattest for equal-size particles (τ_p2 = τ_p1), and becomes thinner at both τ_p2 < τ_p1 and τ_p2 > τ_p1. Defining f as the friction time ratio of the smaller particle to the larger one, we find that, at a given f in (1/2) ≲ f ≲ 1, the PDF fatness first increases with the friction time τ_p,h of the larger particle, peaks at τ_p,h ≅ τ_η, and then decreases as τ_p,h increases further. For 0 ≤ f ≲ (1/4), the PDF becomes continuously thinner with increasing τ_p,h. The PDF is nearly Gaussian only if τ_p,h is sufficiently large (>>T_L). These features are successfully explained by the Pan and Padoan model. Using our simulation data and some simplifying assumptions, we estimated the fractions of collisions resulting in sticking, bouncing, and fragmentation as a function of the dust size in protoplanetary disks, and argued that accounting for the non-Gaussianity of the collision velocity may help further alleviate the bouncing barrier problem.

  5. SEM Microanalysis of Particles: Concerns and Suggestions

    NASA Astrophysics Data System (ADS)

    Fournelle, J.

    2008-12-01

    The scanning electron microscope (SEM) is well suited to examining and characterizing small (i.e. <10 micron) particles. Particles can be imaged and their sizes and shapes determined. With energy dispersive x-ray spectrometers (EDS), chemical compositions can be determined quickly. Despite the ease of acquiring x-ray spectra and chemical compositions, there are potentially major sources of error to be recognized. Problems with EDS analyses of small particles: Qualitative estimates of composition (e.g. stating that Si>Al>Ca>Fe plus O) are easy. However, to have confidence that a chemical composition is accurate, several issues should be examined. (1) Particle Mass Effect: Is the accelerating voltage appropriate for the specimen size? Are all the incident electrons remaining inside the particle, and not traveling out of the sample side or bottom? (2) Particle Absorption Effect: What is the geometric relationship of the beam impact point to the x-ray detector? The x-ray intensity will vary by significant amounts for the same material if the grains are irregular and the path out of the sample in the direction of the detector is longer or shorter. (3) Particle Fluorescence Effect: This is generally a smaller error, but should be considered: for small particles, using large standards, there will be a few % fewer x-rays generated in a small particle relative to one of the same composition that is 50-100 times larger. Also, if the sample sits on a grid of a particular composition (e.g. a Si wafer), potentially several % of Si could appear in the analysis. (4) In an increasing number of laboratories, with environmental or variable pressure SEMs, the Gas Skirt Effect works against you: here the incident electron beam scatters in the gas in the chamber, with fewer electrons impacting the target spot and some others hitting grains hundreds of microns away, producing spectra that could be faulty. (5) Inclusion of measured oxygen: if the measured oxygen x-ray counts are utilized, significant errors can be introduced by differential absorption of this low-energy x-ray. (6) Standardless Analysis: This typical method of doing EDS analysis has a major pitfall: the printed analysis is normalized to 100 wt%, thereby eliminating an important clue to analytical error. Suggestions: (1) Use a lower voltage, e.g. 10 kV, reducing effects 1, 2, and 3 above. (2) Use standards--traditional flat polished ones--and don't initially normalize totals. Discrepancies can then be observed and addressed, not ignored. (3) Always include oxygen by stoichiometry, not measurement. (4) Experimental simulation: Using material of constant composition (e.g. NIST glass K-411, or another homogeneous multi-element material with the elements of interest), grind it into fragments of similar size to your unknowns, and see what the analytical error is for measurements of these known particles. Analyses of your unknown material will be no better, and probably worse, particularly if the grains are smaller. The results of this experiment should be reported whenever discussing measurements on the unknown materials. (5) Monte Carlo simulation: Programs such as PENEPMA allow the creation of complex-geometry samples (and samples on substrates), and the resulting EDS spectra can be generated. This allows estimation of errors for representative cases. It is slow, however; other simulators such as DTSA-II promise faster simulations with some limitations. (6) EBSD: this is perfectly suited to some problems with SEM identification of small particles, e.g. distinguishing magnetite (Fe3O4) from hematite (Fe2O3), which is virtually impossible to do by EDS. With the appropriate hardware and software, electron diffraction patterns on particles can be gathered and the crystal type determined.

  6. Pairwise Interaction Extended Point-Particle (PIEP) model for multiphase jets and sedimenting particles

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Balachandar, S.

    2017-11-01

    We perform a series of Euler-Lagrange direct numerical simulations (DNS) of multiphase jets and sedimenting particles. The forces the flow exerts on the particles in these two-way coupled simulations are computed using the Basset-Boussinesq-Oseen (BBO) equations. These forces do not explicitly account for particle-particle interactions, even though such pairwise interactions, induced by the perturbations from neighboring particles, may be important, especially when the particle volume fraction is high. Such effects have been largely unaddressed in the literature. Here, we implement the Pairwise Interaction Extended Point-Particle (PIEP) model to simulate the effect of neighboring particle pairs. A simple collision model is also applied to avoid unphysical overlapping of solid spherical particles. The simulation results indicate that the PIEP model provides a richer, more detailed description of the motion of the dispersed phase (droplets and particles). Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) project N00014-16-1-2617.

  7. Simulation of deterministic energy-balance particle agglomeration in turbulent liquid-solid flows

    NASA Astrophysics Data System (ADS)

    Njobuenwu, Derrick O.; Fairweather, Michael

    2017-08-01

    An efficient technique to simulate turbulent particle-laden flow at high mass loadings within the four-way coupled simulation regime is presented. The technique implements large-eddy simulation, discrete particle simulation, a deterministic treatment of inter-particle collisions, and an energy-balanced particle agglomeration model. The algorithm to detect inter-particle collisions is such that the computational costs scale linearly with the number of particles present in the computational domain. On detection of a collision, particle agglomeration is tested based on the pre-collision kinetic energy, restitution coefficient, and van der Waals' interactions. The performance of the technique developed is tested by performing parametric studies on the influence of the restitution coefficient (en = 0.2, 0.4, 0.6, and 0.8), particle size (dp = 60, 120, 200, and 316 μm), Reynolds number (Reτ = 150, 300, and 590), and particle concentration (αp = 5.0 × 10^-4, 1.0 × 10^-3, and 5.0 × 10^-3) on particle-particle interaction events (collision and agglomeration). The results demonstrate that the collision frequency shows a linear dependency on the restitution coefficient, while the agglomeration rate shows an inverse dependence. Collisions among smaller particles are more frequent and more efficient in forming agglomerates than those of coarser particles. The particle-particle interaction events show a strong dependency on the shear Reynolds number Reτ, while increasing the particle concentration effectively enhances particle collision and agglomeration whilst having only a minor influence on the agglomeration rate. Overall, the sensitivity of the particle-particle interaction events to the selected simulation parameters is found to influence the population and distribution of the primary particles and agglomerates formed.
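
    Two of the ingredients above lend themselves to a compact sketch: a uniform cell list, which keeps collision detection roughly linear in particle count, and an energy-balance sticking test in which a colliding pair agglomerates when the post-collision relative kinetic energy cannot escape an adhesive well. All parameter values and the 2-D geometry are illustrative assumptions, not the paper's LES-DEM setup.

```python
import numpy as np

rng = np.random.default_rng(6)

def cell_pairs(pos, box, cell):
    """Yield candidate pairs from the same or neighboring cells (O(N) build)."""
    n_side = int(box / cell)
    grid = {}
    for i, p in enumerate(pos):
        key = tuple((p // cell).astype(int) % n_side)
        grid.setdefault(key, []).append(i)
    for key, members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nb = grid.get(((key[0] + dx) % n_side, (key[1] + dy) % n_side), [])
                for i in members:
                    for j in nb:
                        if i < j:
                            yield i, j

def agglomerates(pos, vel, diam, box, e_n=0.4, E_adh=0.5):
    stuck = []
    for i, j in cell_pairs(pos, box, cell=2.0 * diam):
        if np.linalg.norm(pos[i] - pos[j]) < diam:          # contact detected
            g = np.linalg.norm(vel[i] - vel[j])
            E_kin_after = 0.25 * (e_n * g) ** 2             # reduced-mass KE, m = 1
            if E_kin_after < E_adh:                         # energy-balance test
                stuck.append((i, j))
    return stuck

pos = rng.uniform(0, 20, (500, 2))
vel = rng.standard_normal((500, 2))
print(len(agglomerates(pos, vel, diam=0.5, box=20.0)), "agglomeration events")
```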

  8. New methods in WARP, a particle-in-cell code for space-charge dominated beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D., LLNL

    1998-01-12

    The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being "space-charge dominated") and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments are discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining the essential physics. An interface using Python as the interpreter layer instead of Basis has been developed. A parallel version of WARP has been developed using Python.

  9. A collision scheme for hybrid fluid-particle simulation of plasmas

    NASA Astrophysics Data System (ADS)

    Nguyen, Christine; Lim, Chul-Hyun; Verboncoeur, John

    2006-10-01

    Desorption phenomena at the wall of a tokamak can lead to the introduction of impurities at the edge of a thermonuclear plasma. In particular, the use of carbon as a constituent of the tokamak wall, as planned for ITER, requires the study of carbon and hydrocarbon transport in the plasma, including an understanding of the collisional interaction with the plasma. These collisions can result in new hydrocarbons, hydrogen, secondary electrons and so on. Computational modeling is a primary tool for studying these phenomena. XOOPIC [1] and OOPD1 are widely used computer modeling tools for the simulation of plasmas. Both are particle codes. Particle simulation gives more kinetic information than fluid simulation, but more computation time is required. In order to reduce this disadvantage, hybrid simulation has been developed and applied to the modeling of collisions. Present particle simulation tools such as XOOPIC and OOPD1 employ a Monte Carlo model for the collisions between particle species and a neutral background gas defined by its temperature and pressure. In fluid-particle hybrid plasma models, collisions include combinations of particle and fluid interactions categorized by projectile-target pairing: particle-particle, particle-fluid, and fluid-fluid. For verification of this hybrid collision scheme, we compare simulation results to analytic solutions for classical plasma models. [1] Verboncoeur et al. Comput. Phys. Comm. 87, 199 (1995).

  10. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond it. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize the Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ≈ 0.1 is necessary to simulate accurately the accretion process of a planet over ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.

  11. Adaptive particle-based pore-level modeling of incompressible fluid flow in porous media: a direct and parallel approach

    NASA Astrophysics Data System (ADS)

    Ovaysi, S.; Piri, M.

    2009-12-01

    We present a three-dimensional, fully dynamic, parallel particle-based model for direct pore-level simulation of incompressible viscous fluid flow in disordered porous media. The model was developed from scratch and is capable of simulating flow directly in three-dimensional high-resolution microtomography images of naturally occurring or man-made porous systems. It reads the images as input, where the positions of the solid walls are given. The entire medium, i.e., solid and fluid, is then discretized using particles. The model is based on the Moving Particle Semi-implicit (MPS) technique, which we modify in order to improve its stability. The model handles highly irregular fluid-solid boundaries effectively. It takes into account the viscous pressure drop in addition to gravity forces. It conserves mass and can automatically detect any false connectivity with fluid particles in neighboring pores and throats. It includes a sophisticated algorithm to automatically split and merge particles to maintain the hydraulic connectivity of extremely narrow conduits. Furthermore, it uses novel methods to handle particle inconsistencies and open boundaries. To handle the computational load, we present a fully parallel version of the model that runs on distributed-memory computer clusters and exhibits excellent scalability. The model is used to simulate unsteady-state flow problems under different conditions, starting from straight noncircular capillary tubes with different cross-sectional shapes, i.e., circular/elliptical, square/rectangular and triangular cross-sections. We compare the predicted dimensionless hydraulic conductances with the data available in the literature and observe an excellent agreement. We then test the scalability of our parallel model with two samples of an artificial sandstone, samples A and B, with different volumes and different distributions (non-uniform and uniform) of solid particles among the processors. Excellent linear scalability is obtained for sample B, which has a more uniform distribution of solid particles, leading to superior load balancing. The model is then used to simulate fluid flow directly in REV-size three-dimensional x-ray images of a naturally occurring sandstone. We analyze the quality and consistency of the predicted flow behavior and calculate the absolute permeability, which compares well with the network modeling and lattice-Boltzmann permeabilities available in the literature for the same sandstone. We show that the model conserves mass very well and is computationally stable even in very narrow fluid conduits. The transient and steady-state fluid flow patterns are presented, as well as the steady-state flow rates used to compute absolute permeability. Furthermore, we discuss the vital role of our adaptive particle resolution scheme in preserving the original pore connectivity of the samples and their narrow channels through the splitting and merging of fluid particles.
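
    The MPS building block referred to above can be sketched compactly: a kernel weight, and the particle number density whose deviation from a reference value drives the incompressibility (pressure) solve and whose deficit flags free-surface particles. The lattice geometry, support radius, and threshold below are illustrative assumptions, not this model's modified scheme.

```python
import numpy as np

def w(r, r_e):
    """Standard MPS weight: r_e/r - 1 inside the support radius, 0 outside."""
    with np.errstate(divide="ignore"):
        return np.where((r > 0) & (r < r_e), r_e / np.maximum(r, 1e-12) - 1.0, 0.0)

def number_density(pos, i, r_e):
    r = np.linalg.norm(pos - pos[i], axis=1)
    r[i] = 0.0                                # self-contribution excluded: w(0) = 0
    return w(r, r_e).sum()

# particles on a regular lattice: an interior density defines the reference n0
h = 0.1
xx, yy = np.meshgrid(np.arange(10) * h, np.arange(10) * h)
pos = np.column_stack([xx.ravel(), yy.ravel()])
r_e = 2.1 * h
center, edge = 55, 0                          # interior particle vs. corner particle
n0 = number_density(pos, center, r_e)
print(f"n0 = {n0:.2f}; corner particle n/n0 = {number_density(pos, edge, r_e)/n0:.2f}")
# particles with n/n0 below a threshold (e.g. 0.97) are flagged as free surface
```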

  12. Solar-energetic particles as a probe of the inner heliosphere

    NASA Astrophysics Data System (ADS)

    Chollet, Eileen Emily

    2008-06-01

    In this dissertation, I explore the relationship between solar energetic particles (SEPs) and the interplanetary magnetic field, and I use observations of SEPs to probe the region of space between the Sun and the Earth. After an introduction to the major concepts of heliospheric physics, which describes some of the history of energetic particles and defines the data sets used in the work, the rest of the dissertation is organized around three major concepts related to energetic particle transport: magnetic field-line length, interplanetary turbulence, and particle scattering and diffusion. In Chapter 2, I discuss how energetic particles can be used to measure the lengths of field lines and how particle scattering complicates the interpretation of these measurements. I then propose applying these measurements to a particular open problem: the origin and properties of heliospheric current sheets. In the next chapter, I move from the large to the small scale and apply energetic particle measurements to important problems in interplanetary turbulence. I introduce two energetic-particle features, one of which I discovered in the course of this work, which have size scales roughly that of the correlation scale of the turbulence (the largest scale over which observations are expected to be similar). I discuss how multi-spacecraft measurements of these energetic-particle features can provide a measure of the correlation scale independent of the magnetic field measurements. Finally, I consider interplanetary scattering and diffusion in detail. I describe new observations of particle diffusion in the direction perpendicular to the average magnetic field, showing that particles scatter only a few times between their injection at the Sun and observation at the Earth. I also provide numerical simulation results for diffusion parallel to the field, which can be used to correct for the effects of transport on the particles. These corrections allow inferences to be made about the particle energies at injection from observations of the event-integrated fluences at 1 AU. By carefully including scattering, cooling, field-line meandering and turbulence effects, solar energetic particles become a powerful tool for studying the inner heliosphere.

  13. Effect of gravity on terminal particle settling velocity on Moon, Mars and Earth

    NASA Astrophysics Data System (ADS)

    Kuhn, Nikolaus J.

    2013-04-01

    Gravity has a non-linear effect on the settling velocity of sediment particles in liquids and gases due to the interdependence of settling velocity, drag and friction. However, Stokes' Law, the common way of estimating the terminal velocity of a particle moving in a gas or liquid, assumes a linear relationship between terminal velocity and gravity. For terrestrial applications this "error" is not relevant, but it may strongly influence the terminal velocity achieved by settling particles on Mars. False estimates of these settling velocities will, in turn, affect the interpretation of particle sizes observed in sedimentary rocks on Mars. Wrong interpretations may occur, for example, when the texture of sedimentary rocks is linked to the amount and hydraulics of runoff and thus ultimately to the environmental conditions on Mars at the time of their formation. A good understanding of particle behaviour in liquids on Mars is therefore essential. In principle, the effect of lower gravity on settling velocity can also be achieved by reducing the difference in density between particle and gas or liquid. However, the use of such analogues to simulate the lower Martian gravity on Earth creates other problems, because the properties (i.e. viscosity) and interaction of the liquids and sediment (i.e. flow around the boundary layer between liquid and particle) differ from those of water and mineral particles. An alternative for measuring the actual settling velocities of particles under Martian gravity, on Earth, is offered by placing a settling tube on a reduced-gravity flight and conducting settling tests within the 20 to 25 seconds of Martian gravity that can be simulated during such a flight. In this presentation we report the results of such a test conducted during a reduced-gravity flight in November 2012. The results explore the strength of the non-linearity in the gravity-settling velocity relationship for terrestrial, lunar and Martian gravity.
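
    To see why the dependence on gravity is non-linear, one can iterate the force balance with a Reynolds-number-dependent drag coefficient; the Schiller-Naumann correlation and the quartz-in-water parameters below are illustrative assumptions, not the configuration of the flight experiment:

        import numpy as np

        def terminal_velocity(d, rho_p, rho_f, mu, g):
            """Solve (rho_p - rho_f)*g*V = 0.5*rho_f*Cd*A*v^2 by fixed-point
            iteration; Cd depends on Re = rho_f*v*d/mu (Schiller-Naumann,
            Re < ~800), so v_t is linear in g only in the Stokes limit."""
            v = (rho_p - rho_f) * g * d**2 / (18.0 * mu)   # Stokes first guess
            for _ in range(200):
                Re = max(rho_f * v * d / mu, 1e-12)
                Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687)
                v_new = np.sqrt(4.0 * (rho_p - rho_f) * g * d / (3.0 * Cd * rho_f))
                if abs(v_new - v) < 1e-10:
                    break
                v = 0.5 * (v + v_new)                      # under-relax
            return v

        # 1 mm quartz grain in water under lunar, Martian and terrestrial gravity
        for name, g in [("Moon", 1.62), ("Mars", 3.71), ("Earth", 9.81)]:
            print(name, terminal_velocity(1e-3, 2650.0, 1000.0, 1e-3, g))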

  14. Tracking control of colloidal particles through non-homogeneous stationary flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Híjar, Humberto, E-mail: humberto.hijar@lasallistas.org.mx

    2013-12-21

    We consider the problem of controlling the trajectory of a single colloidal particle in a fluid with steady non-homogeneous flow. We use a Langevin equation to describe the dynamics of this particle, where the friction term is assumed to be given by Faxén's theorem for the force on a sphere immersed in a stationary flow. We use this description to propose an explicit control force field to be applied to the particle such that it will follow asymptotically any given desired trajectory, starting from an arbitrary initial condition. We show that the dynamics of the controlled particle can be mapped onto a set of stochastic harmonic oscillators and that the velocity gradient of the solvent induces an asymmetric coupling between them. We study the particular case of a Brownian particle controlled through a plane Couette flow and show explicitly that the velocity gradient of the solvent renders the dynamics non-stationary and non-reversible in time. We quantify this effect in terms of the correlation functions for the position of the controlled particle, which turn out to exhibit contributions depending exclusively on the non-equilibrium character of the state of the solvent. In order to test the validity of our model, we perform simulations of the controlled particle moving in a simple shear flow, using a hybrid method combining molecular dynamics and multi-particle collision dynamics. We confirm numerically that the proposed guiding force allows for controlling the trajectory of the micro-sized particle by compelling it to follow diverse specific trajectories in fluids with homogeneous shear rates of different strengths. In addition, we find that the non-equilibrium correlation functions in simulations exhibit the same qualitative behavior predicted by the model, thus revealing the presence of the asymmetric non-equilibrium coupling mechanism induced by the velocity gradient.
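
    The paper's explicit control force is not reproduced in the abstract; the sketch below shows the generic idea in the overdamped limit for plane Couette flow: a feedback force that cancels advection by the solvent and adds a linear pull toward the desired path, so the tracking error decays exponentially on average. The gains, units and circular target path are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        gamma_dot, zeta, D, k = 1.0, 1.0, 0.01, 5.0  # shear rate, friction, diffusion, gain
        dt, n_steps = 1e-3, 20000

        def u_flow(x):
            """Plane Couette flow: u = (gamma_dot * y, 0)."""
            return np.array([gamma_dot * x[1], 0.0])

        x_des = lambda t: np.array([np.cos(t), np.sin(t)])      # desired trajectory
        xdot_des = lambda t: np.array([-np.sin(t), np.cos(t)])  # its velocity

        x = np.array([2.0, -1.0])   # arbitrary initial condition
        for step in range(n_steps):
            t = step * dt
            # cancel advection, add feedback: error e = x - x_des obeys de = -k*e dt + noise
            f_ctrl = zeta * (xdot_des(t) - u_flow(x) - k * (x - x_des(t)))
            x = x + (u_flow(x) + f_ctrl / zeta) * dt \
                  + np.sqrt(2 * D * dt) * rng.standard_normal(2)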

  15. Inclusion of heat transfer computations for particle laden flows

    NASA Astrophysics Data System (ADS)

    Feng, Zhi-Gang; Michaelides, Efstathios E.

    2008-04-01

    A newly developed direct numerical simulation method has been used to study the dynamics of nonisothermal cylindrical particles in particulate flows. The momentum and energy transfer equations are solved to compute the effects of heat transfer on the sedimentation of particles. Among the effects examined is the drag force on nonisothermal particles, which we found depends strongly on the Reynolds and Grashof numbers. It was observed that heat advection between hotter particles and the fluid causes the drag coefficient of the particles to increase significantly at relatively low Reynolds numbers. For a Grashof number of 100, the drag-enhancement effect diminishes when the Reynolds number exceeds 50. Conversely, heat advection with colder particles reduces the drag coefficient at low and medium Reynolds numbers (Re < 50) for a Grashof number of -100. We used this numerical method to study the problem of a pair of hot particles settling in a container at different Grashof numbers. In isothermal cases, such a pair of particles would undergo the well-known drafting-kissing-tumbling (DKT) motion. However, it was observed that the buoyancy currents induced by the hotter particles reverse the DKT motion of the particles or suppress it altogether. Finally, the sedimentation of a circular cluster of 172 particles in an enclosure at two different Grashof numbers was studied, and the main features of the results are presented.

  16. Particle Number Dependence of the N-body Simulations of Moon Formation

    NASA Astrophysics Data System (ADS)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

    The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited to between 10^4 and 10^5. We develop an N-body simulation code on multiple PEZY-SC processors and deploy the Framework for Developing Particle Simulators (FDPS) to deal with large numbers of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10^7, in which one particle corresponds to a satellitesimal about 10 km in size. We find that the spiral structures inside the Roche-limit radius differ between low-resolution simulations (N ≤ 10^5) and high-resolution simulations (N ≥ 10^6). As a consequence of this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  17. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  18. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  19. Adaptive infinite impulse response system identification using modified-interior search algorithm with Lévy flight.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva

    2017-03-01

    In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single-parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. Proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmark IIR systems using same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples, and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
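
    The abstract gives no pseudocode for M-ISA; the one ingredient that can be sketched from it is the Lévy-flight step, here drawn with Mantegna's algorithm (an assumption about which generator the authors used):

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(beta=1.5, size=1, rng=None):
            """Mantegna's algorithm: heavy-tailed Levy-stable steps that give a
            metaheuristic occasional long jumps (diversification) between many
            short local moves (intensification)."""
            rng = rng or np.random.default_rng()
            sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
                       / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma_u, size)
            v = rng.normal(0.0, 1.0, size)
            return u / np.abs(v) ** (1 / beta)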

  20. Improved particle position accuracy from off-axis holograms using a Chebyshev model.

    PubMed

    Öhman, Johan; Sjödahl, Mikael

    2018-01-01

    Side-scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From the holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find its axial position. The method is based on the change in the sign of the wavefront curvature around the plane of the true particle position. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly, i.e., the deviation from a plane wave, in the reconstructed volume. From the model coefficients it is possible to find particle locations. Simulated results show increased performance in the presence of noise compared to the use of finite-difference methods: across the noise levels tested, the standard deviation ranges from 3-39 μm with finite differences versus 6-10 μm with the proposed model. Experimental results show a corresponding improvement, with the standard deviation decreased from 18 μm to 13 μm.
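
    A toy version of the idea under assumed data: fit a low-order Chebyshev model to a noisy axial phase profile and locate the particle plane where the fitted curvature changes sign (the synthetic profile (z - z0)^3 below puts that sign change at z0 = 0.3; the degree and noise level are illustrative):

        import numpy as np
        from numpy.polynomial.chebyshev import Chebyshev

        z = np.linspace(-1.0, 1.0, 201)        # normalized axial coordinate
        z0 = 0.3                               # true particle plane (synthetic)
        phase = (z - z0)**3 + 0.02 * np.random.default_rng(1).standard_normal(z.size)

        fit = Chebyshev.fit(z, phase, deg=7)   # low order regularizes the noise
        curv = fit.deriv(2)                    # curvature ~ second derivative
        roots = curv.roots()
        z_est = roots[np.isreal(roots)].real
        z_est = z_est[(z_est > -1) & (z_est < 1)]   # candidate particle planes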

  1. QMRPF-UKF Master-Slave Filtering for the Attitude Determination of Micro-Nano Satellites Using Gyro and Magnetometer

    PubMed Central

    Cui, Peiling; Zhang, Huijuan

    2010-01-01

    In this paper, the problem of estimating the attitude of a micro-nano satellite is considered, with geomagnetic field measurements obtained via a three-axis magnetometer and angular rate obtained via a gyro. For this application, a QMRPF-UKF master-slave filtering method is proposed, which uses the QMRPF and UKF algorithms to estimate the rotation quaternion and the gyro bias parameters, respectively. The computational complexity associated with the particle filtering technique is reduced by introducing a multiresolution approach that permits a significant reduction in the number of particles. This renders the QMRPF-UKF master-slave filter computationally efficient and enables its implementation with a remarkably small number of particles. Simulation results using QMRPF-UKF are given, which demonstrate the validity of the QMRPF-UKF nonlinear filter. PMID:22163448

  2. Turbulent transport of large particles in the atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Richter, D. H.; Chamecki, M.

    2017-12-01

    To describe the transport of heavy dust particles in the atmosphere, assumptions must typically be made in order to connect the micro-scale emission processes with the larger-scale atmospheric motions. In the context of numerical models, this can be thought of as the transport process which occurs between the domain bottom and the first vertical grid point. For example, in the limit of small particles (both low inertia and low settling velocity), theory built upon Monin-Obukhov similarity has proven effective in relating mean dust concentration profiles to surface emission fluxes. For increasing particle mass, however, it becomes more difficult to represent dust transport as a simple extension of the transport of a passive scalar, due to issues such as the crossing-trajectories effect. This study focuses specifically on the problem of large-particle transport and dispersion in the turbulent boundary layer by utilizing direct numerical simulations with Lagrangian point-particle tracking to determine under what conditions, if any, large dust particles (larger than 10 μm in diameter) can be accurately described in a simplified Eulerian framework. In particular, results will be presented detailing the independent contributions of both particle inertia and particle settling velocity relative to the strength of the surrounding turbulent flow, and consequences of overestimating surface fluxes via traditional parameterizations will be demonstrated.

  3. Simulation of plume dynamics by the Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Mora, Peter; Yuen, David A.

    2017-09-01

    The Lattice Boltzmann Method (LBM) is a semi-microscopic method that simulates fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D LBM simulations of a fluid in a rectangular box being heated from below and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates for the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.
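
    For orientation, the isothermal core of a D2Q9 lattice Boltzmann update is sketched below; thermal convection as in the paper additionally needs a coupled temperature distribution and a Boussinesq-type buoyancy force, and the single-relaxation-time BGK collision is an assumption, since the abstract does not name the collision operator:

        import numpy as np

        # D2Q9 lattice velocities and weights
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        def equilibrium(rho, u):
            """BGK equilibrium f_eq for each of the 9 directions (c_s^2 = 1/3)."""
            cu = np.einsum('qd,xyd->qxy', c, u)
            usq = np.einsum('xyd,xyd->xy', u, u)
            return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        def collide_stream(f, tau):
            """One BGK collision step plus streaming on a periodic grid;
            f has shape (9, nx, ny)."""
            rho = f.sum(axis=0)
            u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
            f = f - (f - equilibrium(rho, u)) / tau
            for q in range(9):
                f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)
            return f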

  4. OSIRIS - an object-oriented parallel 3D PIC code for modeling laser and particle beam-plasma interaction

    NASA Astrophysics Data System (ADS)

    Hemker, Roy

    1999-11-01

    The advances in computational speed now make it possible to do full 3D PIC simulations of laser-plasma and beam-plasma interactions, but at the same time the increased complexity of these problems makes it necessary to apply modern approaches like object-oriented programming to the development of simulation codes. We report here on our progress in developing an object-oriented parallel 3D PIC code using Fortran 90. In its current state the code contains algorithms for 1D, 2D, and 3D simulations in Cartesian coordinates and for 2D cylindrically-symmetric geometry. For all of these algorithms the code allows for a moving simulation window and arbitrary domain decomposition in any number of dimensions. Recent 3D simulation results on the propagation of intense laser and electron beams through plasmas will be presented.

  5. Collisionless stellar hydrodynamics as an efficient alternative to N-body methods

    NASA Astrophysics Data System (ADS)

    Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard

    2013-01-01

    The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However, when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smoothed Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach in SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead, which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach, which we term `collisionless stellar hydrodynamics', enables us to do away with the particle-mesh approach and, since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on petaflop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high-order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by swing amplification theory.

  6. Automated classification of single airborne particles from two-dimensional angle-resolved optical scattering (TAOS) patterns by non-linear filtering

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni Franco; Pan, Yong-Le; Aptowicz, Kevin B.; Casati, Caterina; Pinnick, Ronald G.; Chang, Richard K.; Videen, Gorden W.

    2013-12-01

    Measurement of two-dimensional angle-resolved optical scattering (TAOS) patterns is an attractive technique for detecting and characterizing micron-sized airborne particles. In general, the interpretation of these patterns and the retrieval of the particle refractive index, shape or size alone are difficult problems. By reformulating the problem in statistical learning terms, a solution is proposed herewith: rather than identifying airborne particles from their scattering patterns, TAOS patterns themselves are classified by a learning machine, where feature extraction interacts with multivariate statistical analysis. Feature extraction relies on spectrum enhancement, which includes the discrete cosine transform and non-linear operations. Multivariate statistical analysis includes computation of the principal components and supervised training, based on the maximization of a suitable figure of merit. All algorithms have been combined to analyze TAOS patterns, organize feature vectors, design classification experiments, carry out supervised training, assign unknown patterns to classes, and fuse information from different training and recognition experiments. The algorithms have been tested on a data set with more than 3000 TAOS patterns. The parameters that control the algorithms at different stages have been allowed to vary within suitable bounds and are optimized to some extent. Classification has been targeted at discriminating aerosolized Bacillus subtilis particles, a simulant of anthrax, from atmospheric aerosol particles and interfering particles, like diesel soot. By assuming that all training and recognition patterns come from the respective reference materials only, the most satisfactory classification result corresponds to 20% false negatives for B. subtilis particles and <11% false positives for all other aerosol particles. The most effective operations have consisted of thresholding TAOS patterns in order to reject defective ones, and forming training sets from three or four pattern classes. The presented automated classification method may be adapted into a real-time operation technique, capable of detecting and characterizing micron-sized airborne particles.
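
    A schematic of the described pipeline, DCT-based spectrum enhancement followed by principal components; the specific non-linearity, block size and component count used by the authors are not given, so those below are assumptions:

        import numpy as np
        from scipy.fft import dctn

        def features(pattern, n_coeff=16, eps=1e-6):
            """Spectrum enhancement: 2-D discrete cosine transform of a TAOS
            pattern, a log non-linearity, and the low-frequency block kept as
            the feature vector."""
            spec = dctn(pattern, norm='ortho')
            return np.log(np.abs(spec[:n_coeff, :n_coeff]) + eps).ravel()

        def principal_components(X, n_components=10):
            """Project feature vectors (rows of X) onto the leading principal
            components, computed via SVD of the centered data matrix."""
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[:n_components].T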

  7. Evaluation of the performance of the cross-flow air classifier in manufactured sand processing via CFD-DEM simulations

    NASA Astrophysics Data System (ADS)

    Petit, H. A.; Irassar, E. F.; Barbosa, M. R.

    2018-01-01

    Manufactured sands are particulate materials obtained as a by-product of rock crushing. Particle sizes in the sand can be as large as 6 mm and as small as a few microns. The concrete industry has been increasingly using these sands as fine aggregates to replace natural sands. The main shortcoming is the excess of particles smaller than 0.075 mm (dust). This problem has traditionally been solved by a washing process. Air classification is being studied to replace the washing process and avoid the use of water. The complex classification process can only be understood with the aid of CFD-DEM simulations. This paper evaluates the applicability of a cross-flow air classifier to reduce the amount of dust in manufactured sands. Computational fluid dynamics (CFD) and discrete element modelling (DEM) were used for the assessment. Results show that the correct classification set-up improves the size distribution of the raw materials. The cross-flow air classification is found to be influenced by the particle size distribution and the turbulence inside the chamber. The classifier can be re-designed to work at low inlet velocities to produce manufactured sand for the concrete industry.

  8. Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma

    PubMed Central

    Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan

    2014-01-01

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing the viscous flow properties of blood, including the density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma. The parameterization follows a standard inverse-problem approach in which the optimal micro-parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow, which continuum-based models fail to handle adequately. PMID:24910470
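
    For reference, the classical Morse pair interaction that the model modifies; the paper's modification and the plasma-matching parameter values come out of the inverse-problem search and are not restated in the abstract, so only the textbook form is shown:

        import numpy as np

        def morse_energy(r, D_e, alpha, r0):
            """Morse potential U(r) = D_e*(exp(-2a(r-r0)) - 2*exp(-a(r-r0))):
            well depth D_e at r = r0, stiffness set by alpha."""
            x = np.exp(-alpha * (r - r0))
            return D_e * (x**2 - 2.0 * x)

        def morse_force(r, D_e, alpha, r0):
            """Radial force F(r) = -dU/dr = 2*alpha*D_e*(x^2 - x);
            positive values are repulsive."""
            x = np.exp(-alpha * (r - r0))
            return 2.0 * alpha * D_e * (x**2 - x)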

  9. Shock Interaction with Random Spherical Particle Beds

    NASA Astrophysics Data System (ADS)

    Neal, Chris; Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S. "Bala"; Thakur, Siddharth

    2016-11-01

    In this talk we present results on fully resolved simulations of shock interaction with a randomly distributed bed of particles. Multiple simulations were carried out by varying the number of particles to isolate the effect of volume fraction. The major focus of these simulations was to understand 1) the effect of the shockwave and volume fraction on the forces experienced by the particles, 2) the effect of the particles on the shock wave, and 3) fluid-mediated particle-particle interactions. The peak drag force on particles at different volume fractions shows a downward trend as the depth of the bed increases. This can be attributed to dissipation of energy as the shockwave travels through the bed of particles. One of the fascinating observations from these simulations was the fluctuation in different quantities due to the presence of multiple particles and their random distribution. These are large simulations with hundreds of particles, resulting in a large amount of data. We present statistical analysis of the data and make relevant observations. The average pressure in the computational domain is computed to characterize the strengths of the reflected and transmitted waves. We also present flow-field contour plots to support our observations. U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  10. Parallel particle filters for online identification of mechanistic mathematical models of physiology from monitoring data: performance and real-time scalability in simulation scenarios.

    PubMed

    Zenker, Sven

    2010-08-01

    Combining mechanistic mathematical models of physiology with quantitative observations using probabilistic inference may offer advantages over established approaches to computerized decision support in acute care medicine. Particle filters (PF) can perform such inference successively as data becomes available. The potential of PF for real-time state estimation (SE) for a model of cardiovascular physiology is explored using parallel computers, and the ability to achieve joint state and parameter estimation (JSPE) given minimal prior knowledge is tested. A parallelized sequential importance sampling/resampling algorithm was implemented, and its scalability for the pure SE problem for a non-linear five-dimensional ODE model of the cardiovascular system was evaluated on a Cray XT3 using up to 1,024 cores. JSPE was implemented using a state-augmentation approach with artificial stochastic evolution of the parameters. Its performance when simultaneously estimating the 5 states and 18 unknown parameters, given observations only of arterial pressure, central venous pressure, heart rate, and, optionally, cardiac output, was evaluated in a simulated bleeding/resuscitation scenario. SE was successful and scaled up to 1,024 cores with appropriate algorithm parametrization, with real-time-equivalent performance for up to 10 million particles. JSPE in the described underdetermined scenario achieved excellent reproduction of the observables and qualitative tracking of end-diastolic ventricular volumes and sympathetic nervous activity. However, only a subset of the posterior distributions of the parameters concentrated around the true values for parts of the estimated trajectories. Parallelized PFs' performance makes their application to complex mathematical models of physiology for the purpose of clinical data interpretation, prediction, and therapy optimization appear promising. JSPE in the described extremely underdetermined scenario nevertheless extracted information of potential clinical relevance from the data in this simulation setting. However, fully satisfactory resolution of this problem when minimal prior knowledge about parameter values is available will require further methodological improvements, which are discussed.
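
    The resampling stage is the step that couples all particles (and hence all processors) in a parallel sequential importance sampling/resampling filter. A minimal single-node version of one common low-variance scheme, systematic resampling, is sketched below; the abstract does not name the variant actually used, so this is an assumption:

        import numpy as np

        def systematic_resample(weights, rng=None):
            """Return indices of particles to keep (with repetition), drawn with
            a single uniform offset; O(N) and lower variance than multinomial
            resampling."""
            rng = rng or np.random.default_rng()
            N = len(weights)
            cum = np.cumsum(weights / np.sum(weights))
            cum[-1] = 1.0                      # guard against round-off
            positions = (rng.random() + np.arange(N)) / N
            return np.searchsorted(cum, positions)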

  11. CFD simulation of the combustion process of the low-emission vortex boiler

    NASA Astrophysics Data System (ADS)

    Chernov, A. A.; Maryandyshev, P. A.; Pankratov, E. V.; Lubov, V. K.

    2017-11-01

    Domestic heat and power engineering needs means and methods for optimizing existing boiler plants in order to improve their technical, economic and environmental performance. The development of modern computer technology, numerical modeling methods and specialized software greatly facilitates the solution of many emerging problems. CFD simulation makes it possible to obtain detailed results on the thermochemical and aerodynamic processes taking place in boiler furnaces, in order to optimize their operating modes and develop directions for their modernization. The paper presents the results of simulating the combustion process of a low-emission vortex coal boiler of model E-220/100 using the software package Ansys Fluent. A hexahedral grid with about 2 million cells was constructed for the chosen boiler model. A stationary problem with a two-phase flow was solved. The gaseous components are air, combustion products and volatile substances; the solid phase is coal particles at different burnup stages. The Euler-Lagrange approach was taken as the basis. The trajectories of the coal particles were calculated using the Discrete Phase Model, with the size distribution of the coal-dust particles described by the Rosin-Rammler equation. The partially premixed combustion model was used, which takes into account the elemental composition and heat analysis of the fuel. To account for turbulence, a two-parameter k-ε model with a standard wall function was chosen. Radiative heat transfer was calculated using the P1 approximation of the method of spherical harmonics. The system of spatial equations was solved numerically by the control-volume method using the SIMPLE algorithm of Patankar and Spalding. Comparison of data obtained during industrial-operational tests of low-emission vortex boilers with the results of the mathematical modeling showed acceptable agreement for problems of this level, which confirms the adequacy of the implemented mathematical model.
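
    As one concrete piece of that setup, injecting a Rosin-Rammler particle-size distribution amounts to inverse-transform sampling of a Weibull law; the mean size and spread exponent below are placeholders, since the boiler's fitted values are not given in the abstract:

        import numpy as np

        def rosin_rammler_diameters(d_mean, n, size, rng=None):
            """Rosin-Rammler: the mass fraction retained above diameter d is
            R(d) = exp(-(d/d_mean)**n), i.e. a Weibull law, so sampling reduces
            to d = d_mean * (-ln(1 - u))**(1/n) for uniform u."""
            rng = rng or np.random.default_rng()
            u = rng.random(size)
            return d_mean * (-np.log1p(-u)) ** (1.0 / n)

        # e.g. 10000 coal-dust parcels, 60 um mean size, spread exponent 1.2 (assumed)
        d = rosin_rammler_diameters(60e-6, 1.2, 10000)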

  12. Using adaptive-mesh refinement in SCFT simulations of surfactant adsorption

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Kumar, Rajeev; Jamroz, Ben; Crockett, Robert; Pletzer, Alex

    2013-03-01

    Adsorption of surfactants at interfaces is relevant to many applications such as detergents, adhesives, emulsions and ferrofluids. Atomistic simulations of interface adsorption are challenging due to the difficulty of modeling the wide range of length scales in these problems: the thin interface region in equilibrium with a large bulk region that serves as a reservoir for the adsorbed species. Self-consistent field theory (SCFT) has been extremely useful for studying the morphologies of dense block copolymer melts. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. However, even SCFT methods can be difficult to apply to systems in which small spatial regions require finer resolution than the rest of the simulation grid (e.g. interface adsorption and confinement). We will present results on interface-adsorption simulations using PolySwift++, an object-oriented polymer SCFT simulation code, aided by the Tech-X Chompst library, which enables block-structured AMR calculations via PETSc.

  13. Measurements of the ionization coefficient of simulated iron micrometeoroids

    NASA Astrophysics Data System (ADS)

    Thomas, Evan; Horányi, Mihály; Janches, Diego; Munsat, Tobin; Simolka, Jonas; Sternovsky, Zoltan

    2016-04-01

    The interpretation of meteor radar observations has remained an open problem for decades. One of the most critical parameters for establishing the size of an incoming meteoroid from radar echoes is the ionization coefficient, β, which still remains poorly known. Here we report on new experiments that simulate micrometeoroid ablation in laboratory conditions to measure β for iron particles impacting N2, air, CO2, and He gases. This new data set is compared to previous laboratory data, with which we find agreement except for He and air impacts > 30 km/s. We calibrate the Jones model of β(v), provide fit parameters for these gases, and find agreement for all gases except CO2 and high-speed air impacts, where we observe β_air > 1 for velocities > 70 km/s. These data therefore demonstrate potential problems with using the Jones model for CO2 atmospheres as well as for high-speed meteors on Earth.

  14. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed, in which both the demands and the costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinitely many second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a fuzzy-random simulation technique to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than approaches using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, the genetic algorithm, and tabu search.

  15. Automated 3D trajectory measuring of large numbers of moving particles.

    PubMed

    Wu, Hai Shan; Zhao, Qi; Zou, Danping; Chen, Yan Qiu

    2011-04-11

    Complex dynamics of natural particle systems, such as insect swarms, bird flocks and fish schools, have attracted great attention from scientists for years. Measuring the 3D trajectory of each individual in a group is vital for quantitative study of their dynamic properties, yet such empirical data are rare, mainly due to the challenges of maintaining the identities of large numbers of individuals with similar visual features and frequent occlusions. Here we present an automatic and efficient algorithm to track the 3D motion trajectories of large numbers of moving particles using two video cameras. Our method solves this problem by formulating it as three linear assignment problems (LAPs). For each video sequence, the first LAP obtains 2D tracks of moving targets and is able to maintain target identities in the presence of occlusions; the second matches the visually similar targets across the two views via a novel technique named maximum epipolar co-motion length (MECL), which not only effectively reduces matching ambiguity but also further diminishes the influence of frequent occlusions; the last links 3D track segments into complete trajectories by computing a globally optimal assignment based on temporal and kinematic cues. Experimental results on simulated particle swarms with various particle densities validate the accuracy and robustness of the proposed method. As a real-world case, our method successfully acquired the 3D flight paths of a fruit fly (Drosophila melanogaster) group comprising hundreds of freely flying individuals. © 2011 Optical Society of America
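
    The first of the three LAPs, frame-to-frame 2D linking, can be posed directly as a linear assignment over gated displacement costs; the Hungarian-type solver and gating threshold below are illustrative, and the MECL cross-view matching and track-segment stitching stages are not shown:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_frames(prev_pts, next_pts, max_disp):
            """Assign detections in frame t to frame t+1 by minimizing total
            squared displacement, forbidding implausibly large moves."""
            cost = ((prev_pts[:, None, :] - next_pts[None, :, :])**2).sum(-1)
            cost[cost > max_disp**2] = 1e12    # gate (assumed threshold style)
            rows, cols = linear_sum_assignment(cost)
            keep = cost[rows, cols] < 1e12     # drop gated "assignments"
            return rows[keep], cols[keep]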

  16. Failure processes in soft and quasi-brittle materials with nonhomogeneous microstructures

    NASA Astrophysics Data System (ADS)

    Spring, Daniel W.

    Material failure pervades the fields of materials science and engineering; it occurs at various scales and in various contexts. Understanding the mechanisms by which a material fails can lead to advancements in the way we design and build the world around us. For example, in structural engineering, understanding the fracture of concrete and steel can lead to improved structural systems and safer designs; in geological engineering, understanding the fracture of rock can lead to increased efficiency in oil and gas extraction; and in biological engineering, understanding the fracture of bone can lead to improvements in the design of bio-composites and medical implants. In this thesis, we numerically investigate a wide spectrum of failure behavior in soft and quasi-brittle materials with nonhomogeneous microstructures, considering a statistical distribution of material properties. The first topic we investigate considers the influence of interfacial interactions on the macroscopic constitutive response of particle-reinforced elastomers. When a particle is embedded into an elastomer, the polymer chains in the elastomer tend to adsorb (or anchor) onto the surface of the particle, creating a region in the vicinity of each particle (often referred to as an interphase) with properties distinct from those of the bulk elastomer. This interphasial region has been known to exist for many decades, but is primarily omitted in computational investigations of such composites. In this thesis, we present an investigation into the influence of interphases on the macroscopic constitutive response of particle-filled elastomers undergoing large deformations. In addition, at large deformations, a localized region of failure tends to accumulate around inclusions. To capture this localized region of failure (often referred to as interfacial debonding), we use cohesive zone elements which follow the Park-Paulino-Roesler traction-separation relation. To account for friction, we present a new coupled cohesive-friction relation and detail its formulation and implementation. In the process of this investigation, we developed a small library of cohesive elements for use with a commercially available finite element analysis software package. Additionally, in this thesis, we present a series of methods for reducing mesh dependency in two-dimensional dynamic cohesive fracture simulations of quasi-brittle materials. In this setting, cracks are only permitted to propagate along element facets, thus a poorly designed discretization of the problem domain can introduce artifacts into the fracture behavior. To reduce mesh-induced artifacts, we consider unstructured polygonal finite elements. A randomly seeded polygonal mesh leads to an isotropic discretization of the problem domain, which does not bias the direction of crack propagation. However, polygonal meshes tend to limit the possible directions a crack may travel at each node, making this discretization a poor candidate for dynamic cohesive fracture simulations. To alleviate this problem, we propose two new topological operators. The first operator we propose is adaptive element-splitting, and the second is adaptive mesh refinement. Both operators are designed to improve the ability of unstructured polygonal meshes to capture crack patterns in dynamic cohesive fracture simulations. However, we demonstrate that element-splitting is more suited to pervasive fracture problems, whereas adaptive refinement is more suited to problems exhibiting a dominant crack.
Finally, we investigate the use of geometric and constitutive design features to regularize pervasive fragmentation behavior in three dimensions. Throughout pervasive fracture simulations, many cracks initiate, propagate, branch and coalesce simultaneously. Because of the cohesive element method's unique framework, this behavior can be captured in a regularized manner. In this investigation, unstructuring techniques are used to introduce randomness into the numerical model. The behavior of quasi-brittle materials undergoing pervasive fracture and fragmentation is then examined using three examples. The examples are selected to investigate some of the significant factors influencing pervasive fracture and fragmentation behavior, including geometric features, loading conditions, and material gradation.

  17. Performance Review of Harmony Search, Differential Evolution and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Mohan Pandey, Hari

    2017-08-01

    Metaheuristic algorithms are effective in the design of intelligent systems. These algorithms are widely applied to solve complex optimization problems, including image processing, big-data analytics, language processing, pattern recognition and others. This paper presents a performance comparison of three metaheuristic algorithms, namely Harmony Search, Differential Evolution, and Particle Swarm Optimization. These algorithms originate from altogether different families of metaheuristics yet share a common objective. Standard benchmark functions are used for the simulation, and statistical tests are conducted to draw conclusions on performance. The key motivation for this research is to categorize the algorithms' computational capabilities, which may be useful to researchers.
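
    Of the three algorithms compared, Particle Swarm Optimization has the most compact update rule; a canonical step is sketched below, with typical textbook values for the inertia weight and acceleration coefficients (not necessarily those used in the paper):

        import numpy as np

        def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
            """One PSO iteration: inertia plus cognitive (personal-best) and
            social (global-best) pulls. x, v, pbest: (n_particles, dim);
            gbest: (dim,). Returns updated positions and velocities."""
            rng = rng or np.random.default_rng()
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            return x + v, v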

  18. A Fractional PDE Approach to Turbulent Mixing; Part II: Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Samiee, Mehdi; Zayernouri, Mohsen

    2016-11-01

    We propose a generalized fractional-order transport model of advection-diffusion kind, with fractional time and space derivatives, governing the evolution of passive scalar turbulence. This approach allows one to incorporate nonlocal and memory effects in the underlying anomalous diffusion, i.e., sub-to-standard diffusion to model the trapping of particles inside the eddies, and super-diffusion associated with the sudden jumps of particles from one coherent region to another. For this nonlocal model, we develop a high-order numerical (spectral) method in addition to a fast solver, examined in the context of some canonical problems.
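
    The abstract describes, but does not print, the governing law; one common space-time fractional advection-diffusion form consistent with that description (a Caputo time derivative of order α for trapping/memory and a Riesz space derivative of order β for long jumps; the authors' exact operators may differ) is:

        % illustrative assumed form, not the paper's verbatim equation
        \frac{\partial^{\alpha} c}{\partial t^{\alpha}}
          + \mathbf{u} \cdot \nabla c
          = \kappa \, \frac{\partial^{\beta} c}{\partial |x|^{\beta}},
        \qquad 0 < \alpha \le 1, \quad 1 < \beta \le 2,

    which recovers the standard advection-diffusion equation at α = 1, β = 2.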

  19. Euler-Lagrange Simulations of Shock Wave-Particle Cloud Interaction

    NASA Astrophysics Data System (ADS)

    Koneru, Rahul; Rollin, Bertrand; Ouellet, Frederick; Park, Chanyoung; Balachandar, S.

    2017-11-01

    Numerical experiments of a shock interacting with evolving and fixed clouds of particles are performed. In these simulations we use an Eulerian-Lagrangian approach along with state-of-the-art point-particle force and heat-transfer models. For validation, we use the Sandia Multiphase Shock Tube experiments and particle-resolved simulations. The particle curtain, upon interaction with the shock wave, is expected to experience Kelvin-Helmholtz (KH) and Richtmyer-Meshkov (RM) instabilities. In the simulations evolving the particle cloud, the initial volume-fraction profile matches that of the Sandia Multiphase Shock Tube experiments, and the shock Mach number is limited to M = 1.66. Measurements of particle dispersion are made at different initial volume fractions. A detailed analysis of the influence of initial conditions on the evolution of the particle cloud is presented. The early-time behavior of the models is studied in the fixed-bed simulations at varying volume fractions and shock Mach numbers. The mean gas quantities are measured in the context of 1-way and 2-way coupled simulations. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, Contract No. DE-NA0002378.

  20. Flow of ferro-magnetic nanoparticles in a rotating system: a numerical investigation of particle shapes

    NASA Astrophysics Data System (ADS)

    Ahmed, Naveed; Abbasi, Adnan; Saba, Fitnat; Khan, Umar; Mohyud-Din, Syed Tauseef

    2018-02-01

    A colloidal suspension of ferromagnetic particles, sized approximately 10 nm, in a carrier fluid (normally water) is termed a ferrofluid. These fluids have a wide range of practical applications in the biomedical sciences, such as magnetic separation of cells, magnetic drug targeting, and hyperthermia. Given these applications, the keen attention they have received in recent times is very understandable. With a similar inspiration, we present this work, which investigates the flow of a Fe3O4-H2O magneto-nanofluid over a flat surface in a rotating frame. A magnetic field of strength B_0 has been imposed perpendicular to the surface. To capture the effects of the shape of the ferromagnetic particles, the well-known Hamilton-Crosser model has been used to formulate the governing equations. A numerical solution to the problem is obtained to simulate the flow behavior graphically. Variations in the local Nusselt number and the coefficient of skin friction have also been captured. To validate the numerical scheme, a comparison with already existing results for simpler cases of the same problem has also been made. Core findings are summarized in the conclusion section.
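
    Particle shape enters the formulation through the Hamilton-Crosser effective-conductivity model named in the abstract; a direct transcription is below (the sphericity values assigned to the shapes studied are whatever the authors assumed and are not restated here):

        def hamilton_crosser(k_f, k_p, phi, psi):
            """Hamilton-Crosser effective thermal conductivity of a suspension:
            k_f, k_p are fluid and particle conductivities, phi the particle
            volume fraction, psi the sphericity; the shape factor is n = 3/psi
            (n = 3 for spheres)."""
            n = 3.0 / psi
            num = k_p + (n - 1.0) * k_f - (n - 1.0) * phi * (k_f - k_p)
            den = k_p + (n - 1.0) * k_f + phi * (k_f - k_p)
            return k_f * num / den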
